[jira] [Commented] (HDFS-13596) NN restart fails after RollingUpgrade from 2.x to 3.x

2022-09-14 Thread yuyanlei (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-13596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17605091#comment-17605091
 ] 

yuyanlei commented on HDFS-13596:
-

Hi [Hui Fei|https://issues.apache.org/jira/secure/ViewProfile.jspa?name=ferhui], 
I had the same problem recently.

I tested a rolling upgrade from Hadoop 2.7.2 to Hadoop 3.3.4 and then ran a 
downgrade test.

When downgrading, the NameNode failed to start with 
ArrayIndexOutOfBoundsException: 536870913.

So I merged the commit "Fix potential FSImage corruption" 
(8a41edb089fbdedc5e7d9a2aeec63d126afea49f) into Hadoop 2.7.2.

However, the startup still failed, this time with a NullPointerException: the 
Owner and Group of the HDFS directories were null (the test data has 68905183 
blocks).

Later I found that Hadoop 2.7.2 has:

enum PermissionStatusFormat implements LongBitFormat.Enum {
    MODE(null, 16),
    GROUP(MODE.BITS, 25),
    USER(GROUP.BITS, 23);

while Hadoop 3.3.4 (with "Fix potential FSImage corruption") has:

enum PermissionStatusFormat implements LongBitFormat.Enum {
    MODE(null, 16),
    GROUP(MODE.BITS, 24),
    USER(GROUP.BITS, 24);

After I changed GROUP and USER in Hadoop 2.7.2 to 24 and 24, the downgrade was 
successful.

Can this commit, "Fix potential FSImage corruption" 
(https://github.com/lucasaytt/hadoop/commit/8a41edb089fbdedc5e7d9a2aeec63d126afea49f), 
be merged into Hadoop 2.7.2? Is there any hidden danger?
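
To make the effect of the mismatch concrete, here is a minimal, self-contained 
illustration. It is not Hadoop code; it only mimics the idea of LongBitFormat 
packing (MODE in the low 16 bits, then GROUP, then USER), and only the field 
widths are taken from the enums above. Packing with one GROUP/USER split and 
decoding with the other yields wrong serial numbers, and since those numbers 
index the user/group string tables in the NameNode, the looked-up owner/group 
comes out wrong or missing.

{code:java}
public class PermissionBitsDemo {

  static final int MODE_BITS = 16;

  // Pack mode/group/user IDs into one long using the given GROUP width.
  static long pack(long mode, long group, long user, int groupBits) {
    return mode
        | (group << MODE_BITS)
        | (user << (MODE_BITS + groupBits));
  }

  // Unpack the group and user IDs, assuming the given GROUP/USER widths.
  static long[] unpack(long packed, int groupBits, int userBits) {
    long group = (packed >>> MODE_BITS) & ((1L << groupBits) - 1);
    long user = (packed >>> (MODE_BITS + groupBits)) & ((1L << userBits) - 1);
    return new long[] {group, user};
  }

  public static void main(String[] args) {
    // Encode with the 2.7.2 layout: GROUP = 25 bits, USER = 23 bits.
    long packed = pack(0755, 20_000_000L, 5_000_000L, 25);
    // Decode once with matching widths and once with the 3.3.4 layout (24/24).
    long[] matched = unpack(packed, 25, 23);
    long[] mismatched = unpack(packed, 24, 24);
    System.out.println("matched:    group=" + matched[0] + " user=" + matched[1]);
    System.out.println("mismatched: group=" + mismatched[0] + " user=" + mismatched[1]);
  }
}
{code}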

> NN restart fails after RollingUpgrade from 2.x to 3.x
> -
>
> Key: HDFS-13596
> URL: https://issues.apache.org/jira/browse/HDFS-13596
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Hanisha Koneru
>Assignee: Hui Fei
>Priority: Blocker
> Fix For: 3.3.0, 3.2.1, 3.1.3
>
> Attachments: HDFS-13596.001.patch, HDFS-13596.002.patch, 
> HDFS-13596.003.patch, HDFS-13596.004.patch, HDFS-13596.005.patch, 
> HDFS-13596.006.patch, HDFS-13596.007.patch, HDFS-13596.008.patch, 
> HDFS-13596.009.patch, HDFS-13596.010.patch
>
>
> After rollingUpgrade NN from 2.x and 3.x, if the NN is restarted, it fails 
> while replaying edit logs.
>  * After NN is started with rollingUpgrade, the layoutVersion written to 
> editLogs (before finalizing the upgrade) is the pre-upgrade layout version 
> (so as to support downgrade).
>  * When writing transactions to log, NN writes as per the current layout 
> version. In 3.x, erasureCoding bits are added to the editLog transactions.
>  * So any edit log written after the upgrade and before finalizing the 
> upgrade will have the old layout version but the new format of transactions.
>  * When NN is restarted and the edit logs are replayed, the NN reads the old 
> layout version from the editLog file. When parsing the transactions, it 
> assumes that the transactions are also from the previous layout and hence 
> skips parsing the erasureCoding bits.
>  * This cascades into reading the wrong set of bits for other fields and 
> leads to NN shutting down.
> Sample error output:
> {code:java}
> java.lang.IllegalArgumentException: Invalid clientId - length is 0 expected 
> length 16
>  at com.google.common.base.Preconditions.checkArgument(Preconditions.java:88)
>  at org.apache.hadoop.ipc.RetryCache$CacheEntry.(RetryCache.java:74)
>  at org.apache.hadoop.ipc.RetryCache$CacheEntry.(RetryCache.java:86)
>  at 
> org.apache.hadoop.ipc.RetryCache$CacheEntryWithPayload.(RetryCache.java:163)
>  at 
> org.apache.hadoop.ipc.RetryCache.addCacheEntryWithPayload(RetryCache.java:322)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.addCacheEntryWithPayload(FSNamesystem.java:960)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:397)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:249)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:158)
>  at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:888)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:745)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:323)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1086)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:714)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:632)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:694)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:937)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:910)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1643)
>  at 

[jira] [Commented] (HDFS-16772) refreshHostsReader should use the new configuration

2022-09-14 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17605090#comment-17605090
 ] 

ASF GitHub Bot commented on HDFS-16772:
---

ZanderXu opened a new pull request, #4890:
URL: https://github.com/apache/hadoop/pull/4890

   ### Description of PR
   `refreshHostsReader` should use the latest configuration. The related code is 
   as below (a possible fix is sketched after the snippet):
   
   ```
   /** Reread include/exclude files. */
   private void refreshHostsReader(Configuration conf) throws IOException {
     if (conf == null) {
       conf = new HdfsConfiguration();
       // BUG here
       this.hostConfigManager.setConf(conf);
     }
     this.hostConfigManager.refresh();
   }
   ```
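   
   A minimal sketch of one possible fix, assuming the intent is that a 
   caller-supplied configuration should also be applied to `hostConfigManager` 
   (names follow the snippet above; the actual change made by this PR may differ):
   
   ```
   /** Reread include/exclude files. */
   private void refreshHostsReader(Configuration conf) throws IOException {
     // Fall back to a freshly loaded configuration only when none is supplied.
     if (conf == null) {
       conf = new HdfsConfiguration();
     }
     // Apply whichever configuration we ended up with, not just the fallback one.
     this.hostConfigManager.setConf(conf);
     this.hostConfigManager.refresh();
   }
   ```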
   




> refreshHostsReader should use the new configuration
> ---
>
> Key: HDFS-16772
> URL: https://issues.apache.org/jira/browse/HDFS-16772
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Major
>
> `refreshHostsReader` should use the latest configuration.
> The current code is as below:
> {code:java}
> /** Reread include/exclude files. */
> private void refreshHostsReader(Configuration conf) throws IOException {
>   if (conf == null) {
>     conf = new HdfsConfiguration();
>     // BUG here
>     this.hostConfigManager.setConf(conf);
>   }
>   this.hostConfigManager.refresh();
> } {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16772) refreshHostsReader should use the new configuration

2022-09-14 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-16772:
--
Labels: pull-request-available  (was: )

> refreshHostsReader should use the new configuration
> ---
>
> Key: HDFS-16772
> URL: https://issues.apache.org/jira/browse/HDFS-16772
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Major
>  Labels: pull-request-available
>
> `refreshHostsReader` should use the latest configuration.
> The current code is as below:
> {code:java}
> /** Reread include/exclude files. */
> private void refreshHostsReader(Configuration conf) throws IOException {
>   if (conf == null) {
>     conf = new HdfsConfiguration();
>     // BUG here
>     this.hostConfigManager.setConf(conf);
>   }
>   this.hostConfigManager.refresh();
> } {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-16772) refreshHostsReader should use the new configuration

2022-09-14 Thread ZanderXu (Jira)
ZanderXu created HDFS-16772:
---

 Summary: refreshHostsReader should use the new configuration
 Key: HDFS-16772
 URL: https://issues.apache.org/jira/browse/HDFS-16772
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: ZanderXu
Assignee: ZanderXu


`refreshHostsReader` should use the latest configuration.

The current code is as below:
{code:java}
/** Reread include/exclude files. */
private void refreshHostsReader(Configuration conf) throws IOException {
  if (conf == null) {
    conf = new HdfsConfiguration();
    // BUG here
    this.hostConfigManager.setConf(conf);
  }
  this.hostConfigManager.refresh();
} {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16771) JN should tersely print logs about NewerTxnIdException

2022-09-14 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16771?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17605066#comment-17605066
 ] 

ASF GitHub Bot commented on HDFS-16771:
---

ZanderXu commented on code in PR #4882:
URL: https://github.com/apache/hadoop/pull/4882#discussion_r971454489


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/client/QuorumJournalManager.java:
##
@@ -524,9 +524,6 @@ public void selectInputStreams(Collection<EditLogInputStream> streams,
         selectRpcInputStreams(rpcStreams, fromTxnId, onlyDurableTxns);
         streams.addAll(rpcStreams);
         return;
-      } catch (NewerTxnIdException ntie) {
-        // normal situation, we requested newer IDs than any journal has. no new streams
-        return;

Review Comment:
   copy, sir. I will add one UT to test it.





> JN should tersely print logs about NewerTxnIdException
> --
>
> Key: HDFS-16771
> URL: https://issues.apache.org/jira/browse/HDFS-16771
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Major
>  Labels: pull-request-available
>
> JournalNode should tersely print some logs about NewerTxnIdException.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-16767) RBF: Support observer node from Router-Based Federation

2022-09-14 Thread Owen O'Malley (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Owen O'Malley resolved HDFS-16767.
--
Fix Version/s: 3.4.0
   3.3.9
   Resolution: Fixed

I just committed this. Thanks, Simba!

> RBF: Support observer node from Router-Based Federation 
> 
>
> Key: HDFS-16767
> URL: https://issues.apache.org/jira/browse/HDFS-16767
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Simbarashe Dzinamarira
>Assignee: Simbarashe Dzinamarira
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.9
>
>
> Enable routers to direct read calls to observer namenodes.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16767) RBF: Support observer node from Router-Based Federation

2022-09-14 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17605004#comment-17605004
 ] 

ASF GitHub Bot commented on HDFS-16767:
---

omalley closed pull request #4127: HDFS-16767. RBF: Support observer node from 
Router-Based Federation
URL: https://github.com/apache/hadoop/pull/4127




> RBF: Support observer node from Router-Based Federation 
> 
>
> Key: HDFS-16767
> URL: https://issues.apache.org/jira/browse/HDFS-16767
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Simbarashe Dzinamarira
>Assignee: Simbarashe Dzinamarira
>Priority: Major
>  Labels: pull-request-available
>
> Enable routers to direct read calls to observer namenodes.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16771) JN should tersely print logs about NewerTxnIdException

2022-09-14 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16771?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17604946#comment-17604946
 ] 

ASF GitHub Bot commented on HDFS-16771:
---

xkrogen commented on code in PR #4882:
URL: https://github.com/apache/hadoop/pull/4882#discussion_r971240283


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/client/QuorumJournalManager.java:
##
@@ -524,9 +524,6 @@ public void selectInputStreams(Collection<EditLogInputStream> streams,
         selectRpcInputStreams(rpcStreams, fromTxnId, onlyDurableTxns);
         streams.addAll(rpcStreams);
         return;
-      } catch (NewerTxnIdException ntie) {
-        // normal situation, we requested newer IDs than any journal has. no new streams
-        return;

Review Comment:
   So it seems that this logic was never working? I guess that means the tests 
   added in #4560 aren't working properly; we probably need to also confirm that 
   in the "normal" case (where `sinceTxId == highestTxId + 1`), 
   `selectStreamingInputStreams` is never called.



##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/JournalNodeRpcServer.java:
##
@@ -114,6 +114,7 @@ public class JournalNodeRpcServer implements QJournalProtocol,
         .setVerbose(false)
         .build();
 
+    this.server.addTerseExceptions(NewerTxnIdException.class);

Review Comment:
   good catch, we should probably add `CacheMissException` as well, WDYT?



##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/Journal.java:
##
@@ -751,7 +751,11 @@ public GetJournaledEditsResponseProto getJournaledEdits(long sinceTxId,
           "it via " + DFSConfigKeys.DFS_HA_TAILEDITS_INPROGRESS_KEY);
     }
     long highestTxId = getHighestWrittenTxId();
-    if (sinceTxId > highestTxId) {
+    if (sinceTxId == highestTxId + 1) {
+      // Requested edits that don't exist yet; short-circuit the cache here
+      metrics.rpcEmptyResponses.incr();
+      return GetJournaledEditsResponseProto.newBuilder().setTxnCount(0).build();
+    } else if (sinceTxId > highestTxId + 1) {
       // Requested edits that don't exist yet and is newer than highestTxId.

Review Comment:
   Can you add some more detail in the two comments here to make it more clear 
why we treat the `+ 1` case differently?
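   
   A self-contained way to see why the `+ 1` case is special (illustrative only; 
   the variable names follow the diff above, and this is not the actual Journal 
   code):
   
   ```java
   class TxnIdCases {
     // Classify a tailing request against the highest txid this JN has written.
     static String classify(long sinceTxId, long highestTxId) {
       if (sinceTxId <= highestTxId) {
         // The caller is behind: there are journaled edits to return.
         return "serve edits starting at sinceTxId";
       } else if (sinceTxId == highestTxId + 1) {
         // The caller is exactly caught up: it asks for the next txid that has
         // not been written yet. This is normal steady-state tailing, so an
         // empty (txnCount = 0) response is enough; no exception, no noisy log.
         return "return an empty response";
       } else {
         // The caller wants a txid more than one past what has been written, so
         // this JournalNode is lagging behind the quorum; surfacing that (e.g.
         // via NewerTxnIdException) lets the client react accordingly.
         return "report that this JN is behind";
       }
     }
   }
   ```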





> JN should tersely print logs about NewerTxnIdException
> --
>
> Key: HDFS-16771
> URL: https://issues.apache.org/jira/browse/HDFS-16771
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Major
>  Labels: pull-request-available
>
> JournalNode should tersely print some logs about NewerTxnIdException.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16757) Add a new method copyBlockCrossNamespace to DataNode

2022-09-14 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17604771#comment-17604771
 ] 

ASF GitHub Bot commented on HDFS-16757:
---

hadoop-yetus commented on PR #4888:
URL: https://github.com/apache/hadoop/pull/4888#issuecomment-1246873645

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 37s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  buf  |   0m  1s |  |  buf was not available.  |
   | +0 :ok: |  buf  |   0m  1s |  |  buf was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  1s |  |  The patch appears to 
include 4 new or modified test files.  |
    _ HDFS-2139 Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m  5s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  25m 21s |  |  HDFS-2139 passed  |
   | +1 :green_heart: |  compile  |   6m  8s |  |  HDFS-2139 passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   6m  4s |  |  HDFS-2139 passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   1m 31s |  |  HDFS-2139 passed  |
   | +1 :green_heart: |  mvnsite  |   2m 45s |  |  HDFS-2139 passed  |
   | +1 :green_heart: |  javadoc  |   2m 14s |  |  HDFS-2139 passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   2m 31s |  |  HDFS-2139 passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   6m  8s |  |  HDFS-2139 passed  |
   | +1 :green_heart: |  shadedclient  |  22m 48s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 29s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 15s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   6m  1s |  |  the patch passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  cc  |   6m  1s |  |  the patch passed  |
   | -1 :x: |  javac  |   6m  1s | 
[/results-compile-javac-hadoop-hdfs-project-jdkUbuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4888/1/artifact/out/results-compile-javac-hadoop-hdfs-project-jdkUbuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04.txt)
 |  hadoop-hdfs-project-jdkUbuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 generated 3 new + 1135 unchanged - 0 
fixed = 1138 total (was 1135)  |
   | +1 :green_heart: |  compile  |   5m 44s |  |  the patch passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  cc  |   5m 44s |  |  the patch passed  |
   | -1 :x: |  javac  |   5m 44s | 
[/results-compile-javac-hadoop-hdfs-project-jdkPrivateBuild-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4888/1/artifact/out/results-compile-javac-hadoop-hdfs-project-jdkPrivateBuild-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07.txt)
 |  hadoop-hdfs-project-jdkPrivateBuild-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 
with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 generated 3 new + 
1104 unchanged - 0 fixed = 1107 total (was 1104)  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   1m 20s | 
[/results-checkstyle-hadoop-hdfs-project.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4888/1/artifact/out/results-checkstyle-hadoop-hdfs-project.txt)
 |  hadoop-hdfs-project: The patch generated 16 new + 561 unchanged - 3 fixed = 
577 total (was 564)  |
   | +1 :green_heart: |  mvnsite  |   2m 24s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   0m 56s | 
[/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4888/1/artifact/out/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04.txt)
 |  hadoop-hdfs in the patch failed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04.  |
   | +1 :green_heart: |  javadoc  |   2m  3s |  |  the patch passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   6m  1s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  22m 

[jira] [Commented] (HDFS-16770) [Documentation] RBF: Duplicate statement to be removed for better readabilty

2022-09-14 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17604070#comment-17604070
 ] 

ASF GitHub Bot commented on HDFS-16770:
---

prasad-acit commented on PR #4881:
URL: https://github.com/apache/hadoop/pull/4881#issuecomment-1246741083

   Thanks @goiri for quick review & merge.




> [Documentation] RBF: Duplicate statement to be removed for better readabilty
> 
>
> Key: HDFS-16770
> URL: https://issues.apache.org/jira/browse/HDFS-16770
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Renukaprasad C
>Assignee: Renukaprasad C
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>
> Both of the two statements below give the same meaning; the latter one can be removed.
> The Router monitors the local NameNode and its state and heartbeats to the 
> State Store.
> The Router monitors the local NameNode and heartbeats the state to the State 
> Store.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14831) Downgrade Failed from 3.2.0 to 2.7 because of incompatible stringtable

2022-09-14 Thread yuyanlei (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17604019#comment-17604019
 ] 

yuyanlei commented on HDFS-14831:
-

After testing a rolling upgrade from Hadoop 2.7.2 to Hadoop 3.3.4, I conducted a 
downgrade test. I hit a problem when downgrading the NameNode, and an error was 
reported.
Checking the issues, I found HDFS-14831, which is similar to the problem I 
encountered, so I merged the commit 
(https://github.com/lucasaytt/hadoop/commit/8a41edb089fbdedc5e7d9a2aeec63d126afea49f) 
mentioned in https://issues.apache.org/jira/browse/HDFS-14831 into Hadoop 2.7.2.
However, on the downgraded NameNode (2.7.2), write operations could not be 
performed and an error was reported.
The Owner and Group of directories are displayed as empty on the web page.
I subsequently found that GROUP and USER in PermissionStatusFormat are 25 and 23 
in 2.7.2, respectively, while they are 24 and 24 in 3.3.4.
After I changed GROUP and USER to 24 and 24 in 2.7.2, the downgrade succeeded.
Can this commit 
(https://github.com/lucasaytt/hadoop/commit/8a41edb089fbdedc5e7d9a2aeec63d126afea49f) 
be merged into Hadoop 2.7.2? Is there any hidden danger?

 

> Downgrade Failed from 3.2.0 to 2.7 because of incompatible stringtable 
> ---
>
> Key: HDFS-14831
> URL: https://issues.apache.org/jira/browse/HDFS-14831
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.2.0, 3.3.0, 3.1.3
>Reporter: Hui Fei
>Assignee: Hui Fei
>Priority: Major
>
> Mentioned on HDFS-13596.
> Incompatible StringTable changes cause the downgrade from 3.2.0 to 2.7.2 to fail.
> The commit message is as follows, but the corresponding issue was not found:
> {quote}
> commit 8a41edb089fbdedc5e7d9a2aeec63d126afea49f
> Author: Vinayakumar B 
> Date:   Mon Oct 15 15:48:26 2018 +0530
> Fix potential FSImage corruption. Contributed by Daryn Sharp.
> {quote} 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16757) Add a new method copyBlockCrossNamespace to DataNode

2022-09-14 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17603986#comment-17603986
 ] 

ASF GitHub Bot commented on HDFS-16757:
---

ZanderXu opened a new pull request, #4888:
URL: https://github.com/apache/hadoop/pull/4888

   ### Description of PR
   Add a new method copyBlockCrossNamespace to Datanode to support fast copy.
   
   




> Add a new method copyBlockCrossNamespace to DataNode
> 
>
> Key: HDFS-16757
> URL: https://issues.apache.org/jira/browse/HDFS-16757
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Minor
>
> Add a new method copyBlockCrossNamespace in DataTransferProtocol at the 
> DataNode side.
> This method will copy a source block from one namespace to a target block 
> in a different namespace. If the target DN is the same as the current DN, 
> this method will copy the block via HardLink. If the target DN is different 
> from the current DN, this method will copy the block via TransferBlock.
> This method will take the following parameters (a hypothetical signature 
> sketch follows this list):
>  * ExtendedBlock sourceBlock
>  * Token sourceBlockToken
>  * ExtendedBlock targetBlock
>  * Token targetBlockToken
>  * DatanodeInfo targetDN
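
A purely hypothetical sketch of what a method with the parameters listed above 
could look like (the interface name is invented here, and the Token generic type 
is an assumption; the actual DataTransferProtocol change is whatever the 
associated PR defines):
{code:java}
import java.io.IOException;

import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
import org.apache.hadoop.hdfs.protocol.ExtendedBlock;
import org.apache.hadoop.hdfs.security.token.block.BlockTokenIdentifier;
import org.apache.hadoop.security.token.Token;

/** Hypothetical shape of the proposed DataNode-side operation. */
public interface CopyBlockCrossNamespace {
  /**
   * Copy sourceBlock (in one namespace) into targetBlock (in another namespace).
   * If targetDN is the local DataNode, the copy can be done via HardLink;
   * otherwise the block is transferred over the wire to targetDN.
   */
  void copyBlockCrossNamespace(
      ExtendedBlock sourceBlock,
      Token<BlockTokenIdentifier> sourceBlockToken,
      ExtendedBlock targetBlock,
      Token<BlockTokenIdentifier> targetBlockToken,
      DatanodeInfo targetDN) throws IOException;
}
{code}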



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16757) Add a new method copyBlockCrossNamespace to DataNode

2022-09-14 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16757?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-16757:
--
Labels: pull-request-available  (was: )

> Add a new method copyBlockCrossNamespace to DataNode
> 
>
> Key: HDFS-16757
> URL: https://issues.apache.org/jira/browse/HDFS-16757
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Minor
>  Labels: pull-request-available
>
> Add a new method copyBlockCrossNamespace in DataTransferProtocol at the 
> DataNode side.
> This method will copy a source block from one namespace to a target block 
> in a different namespace. If the target DN is the same as the current DN, 
> this method will copy the block via HardLink. If the target DN is different 
> from the current DN, this method will copy the block via TransferBlock.
> This method will take the following parameters:
>  * ExtendedBlock sourceBlock
>  * Token sourceBlockToken
>  * ExtendedBlock targetBlock
>  * Token targetBlockToken
>  * DatanodeInfo targetDN



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-8789) Block Placement policy migrator

2022-09-14 Thread ZanderXu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-8789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17603955#comment-17603955
 ] 

ZanderXu commented on HDFS-8789:


After quickly looking at HDFS-14053, it can migrate the old blocks to satisfy the 
new block placement policy. But if all the old blocks are migrated by the NameNode, 
it will affect the NameNode's processing performance even if we can limit the 
speed of the migration. It would be nice to have a peripheral migration tool that 
can migrate old blocks automatically, efficiently, and with minimal impact.

Besides this migrator, have you disabled migrating the blocks after the NameNode 
becomes active, [~sodonnell]? Because after the NameNode becomes active, it will 
run processMisReplicatedBlocks.

> Block Placement policy migrator
> ---
>
> Key: HDFS-8789
> URL: https://issues.apache.org/jira/browse/HDFS-8789
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Chris Trezzo
>Assignee: Chris Trezzo
>Priority: Major
> Attachments: HDFS-8789-trunk-STRAWMAN-v1.patch
>
>
> As we start to add new block placement policies to HDFS, it will be necessary 
> to have a robust tool that can migrate HDFS blocks between placement 
> policies. This jira is for the design and implementation of that tool.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16767) RBF: Support observer node from Router-Based Federation

2022-09-14 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17603950#comment-17603950
 ] 

ASF GitHub Bot commented on HDFS-16767:
---

hadoop-yetus commented on PR #4127:
URL: https://github.com/apache/hadoop/pull/4127#issuecomment-1246402899

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 59s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  1s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 11 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 39s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  28m 43s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   7m  1s |  |  trunk passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   6m 27s |  |  trunk passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   1m 32s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 27s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m  3s |  |  trunk passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   2m 44s |  |  trunk passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   5m  8s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m 45s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 53s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m  3s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   6m 36s |  |  the patch passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   6m 36s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   6m 21s |  |  the patch passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   6m 21s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   1m 17s | 
[/results-checkstyle-hadoop-hdfs-project.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4127/40/artifact/out/results-checkstyle-hadoop-hdfs-project.txt)
 |  hadoop-hdfs-project: The patch generated 3 new + 145 unchanged - 1 fixed = 
148 total (was 146)  |
   | +1 :green_heart: |  mvnsite  |   2m  7s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 37s |  |  the patch passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   2m 25s |  |  the patch passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   5m  0s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  23m 39s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  | 377m 43s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  unit  |  40m 49s |  |  hadoop-hdfs-rbf in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m  1s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 569m 28s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4127/40/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4127 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets xmllint |
   | uname | Linux 496469b3a85a 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 
01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / b0e67a3b9f324720039eb405d6b3d31a1dc05eb0 |
   | Default Java | Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
   

[jira] [Commented] (HDFS-8789) Block Placement policy migrator

2022-09-14 Thread ZanderXu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-8789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17603948#comment-17603948
 ] 

ZanderXu commented on HDFS-8789:


Thanks [~sodonnell] for your timely comment. I will look into HDFS-14053. Thanks

> Block Placement policy migrator
> ---
>
> Key: HDFS-8789
> URL: https://issues.apache.org/jira/browse/HDFS-8789
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Chris Trezzo
>Assignee: Chris Trezzo
>Priority: Major
> Attachments: HDFS-8789-trunk-STRAWMAN-v1.patch
>
>
> As we start to add new block placement policies to HDFS, it will be necessary 
> to have a robust tool that can migrate HDFS blocks between placement 
> policies. This jira is for the design and implementation of that tool.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-8789) Block Placement policy migrator

2022-09-14 Thread Stephen O'Donnell (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-8789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17603945#comment-17603945
 ] 

Stephen O'Donnell commented on HDFS-8789:
-

I don't think this tool is needed, as we have had HDFS-14053 committed since this 
Jira was opened, which allows you to migrate blocks on a path-by-path basis.

There are no plans from our side to move this forward.

> Block Placement policy migrator
> ---
>
> Key: HDFS-8789
> URL: https://issues.apache.org/jira/browse/HDFS-8789
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Chris Trezzo
>Assignee: Chris Trezzo
>Priority: Major
> Attachments: HDFS-8789-trunk-STRAWMAN-v1.patch
>
>
> As we start to add new block placement policies to HDFS, it will be necessary 
> to have a robust tool that can migrate HDFS blocks between placement 
> policies. This jira is for the design and implementation of that tool.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-8789) Block Placement policy migrator

2022-09-14 Thread ZanderXu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-8789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17603933#comment-17603933
 ] 

ZanderXu commented on HDFS-8789:


We plan to use the upgrade domain in our prod environment, so this tool will be 
necessary for us before deploying the upgrade domain.

After looking into the upgrade domain (UD) and this migrator tool, maybe there 
are some improvements we can make:
 * After deploying the UD, the NameNode should not try to migrate the old blocks 
after becoming active during processMisReplicatesAsync. We should migrate the 
old blocks with this migrator tool.
 * Maybe we can integrate this migrator into the Mover, simply adding one new 
processFile method in the Mover to migrate the blocks that do not satisfy the 
block placement policy.

[~weichiu] [~sodonnell] [~ctrezzo] Do you have plans to push this PR forward? 
If so, I have done some work and am interested in carrying it forward.

> Block Placement policy migrator
> ---
>
> Key: HDFS-8789
> URL: https://issues.apache.org/jira/browse/HDFS-8789
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Chris Trezzo
>Assignee: Chris Trezzo
>Priority: Major
> Attachments: HDFS-8789-trunk-STRAWMAN-v1.patch
>
>
> As we start to add new block placement policies to HDFS, it will be necessary 
> to have a robust tool that can migrate HDFS blocks between placement 
> policies. This jira is for the design and implementation of that tool.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16767) RBF: Support observer node from Router-Based Federation

2022-09-14 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17603923#comment-17603923
 ] 

ASF GitHub Bot commented on HDFS-16767:
---

hadoop-yetus commented on PR #4127:
URL: https://github.com/apache/hadoop/pull/4127#issuecomment-1246349274

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 52s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  1s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 11 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 45s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  28m 20s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   6m 51s |  |  trunk passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   6m 24s |  |  trunk passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   1m 29s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 27s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m  2s |  |  trunk passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   2m 43s |  |  trunk passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   5m  9s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m 25s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 56s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m  5s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   6m 40s |  |  the patch passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   6m 40s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   6m 16s |  |  the patch passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   6m 16s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   1m 17s | 
[/results-checkstyle-hadoop-hdfs-project.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4127/39/artifact/out/results-checkstyle-hadoop-hdfs-project.txt)
 |  hadoop-hdfs-project: The patch generated 3 new + 145 unchanged - 1 fixed = 
148 total (was 146)  |
   | +1 :green_heart: |  mvnsite  |   2m  5s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 36s |  |  the patch passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   2m 25s |  |  the patch passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   5m  0s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  23m 50s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  | 341m 45s |  |  hadoop-hdfs in the patch 
passed.  |
   | -1 :x: |  unit  |  42m 15s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4127/39/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt)
 |  hadoop-hdfs-rbf in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m  0s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 534m 13s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.rbfbalance.TestRouterDistCpProcedure |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4127/39/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4127 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets xmllint |
   | uname | Linux cbaeb3df24f7 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 
01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 

[jira] [Commented] (HDFS-16767) RBF: Support observer node from Router-Based Federation

2022-09-14 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17603922#comment-17603922
 ] 

ASF GitHub Bot commented on HDFS-16767:
---

hadoop-yetus commented on PR #4127:
URL: https://github.com/apache/hadoop/pull/4127#issuecomment-1246347563

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 53s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  1s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 11 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 59s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  25m 47s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   6m 23s |  |  trunk passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   5m 59s |  |  trunk passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   1m 29s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 32s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m  4s |  |  trunk passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   2m 51s |  |  trunk passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   5m  4s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m  7s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 28s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m  1s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   6m  1s |  |  the patch passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   6m  1s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   5m 41s |  |  the patch passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   5m 41s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   1m 17s | 
[/results-checkstyle-hadoop-hdfs-project.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4127/38/artifact/out/results-checkstyle-hadoop-hdfs-project.txt)
 |  hadoop-hdfs-project: The patch generated 2 new + 145 unchanged - 1 fixed = 
147 total (was 146)  |
   | +1 :green_heart: |  mvnsite  |   2m  8s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 39s |  |  the patch passed with JDK 
Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   2m 26s |  |  the patch passed with JDK 
Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   4m 47s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  20m 52s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 389m 52s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4127/38/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  unit  |  37m 48s |  |  hadoop-hdfs-rbf in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m  5s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 568m  3s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.TestLeaseRecovery |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4127/38/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4127 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets xmllint |
   | uname | Linux 1fb4906c1145 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 
01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 723e9063e174589d78553a67a274291aed9f538c