[jira] [Comment Edited] (HDFS-16060) There is an inconsistency between replicas of datanodes when hardware is abnormal

2022-05-11 Thread Daniel Ma (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17535888#comment-17535888
 ] 

Daniel Ma edited comment on HDFS-16060 at 5/12/22 6:27 AM:
---

[~ferhui] 

Thanks for your report; I have encountered a similar issue.

There are a couple of points I want to confirm:

1. Does the write operation succeed when this exception appears in the Datanode log?

2. Does the read operation also fail?


was (Author: daniel ma):
[~ferhui] 

Thanks for your report; I have encountered a similar issue.

There are a couple of points I want to confirm:

1. Does the write operation succeed even with this exception in the Datanode log?

2. Does the read operation also fail?

> There is an inconsistency between replicas of datanodes when hardware is abnormal
> 
>
> Key: HDFS-16060
> URL: https://issues.apache.org/jira/browse/HDFS-16060
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.4.0
>Reporter: Hui Fei
>Priority: Major
>
> We found the following case in a production environment.
>  * replicas of the same block are stored on dn1 and dn2.
>  * the replicas on dn1 and dn2 are different.
>  * Verifying meta & data for the replica succeeds on dn1, and likewise on dn2.
> The user code is just copyFromLocal.
> We first found this error log on the datanode:
> {quote}
> 2021-05-27 04:54:20,471 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Checksum error in block 
> BP-1453431581-x.x.x.x-1531302155027:blk_13892199285_12902824176 from 
> /y.y.y.y:47960
> org.apache.hadoop.fs.ChecksumException: Checksum error: 
> DFSClient_NONMAPREDUCE_-1760730985_129 at 0 exp: 37939694 got: -1180138774
>  at 
> org.apache.hadoop.util.NativeCrc32.nativeComputeChunkedSumsByteArray(Native 
> Method)
>  at 
> org.apache.hadoop.util.NativeCrc32.verifyChunkedSumsByteArray(NativeCrc32.java:69)
>  at 
> org.apache.hadoop.util.DataChecksum.verifyChunkedSums(DataChecksum.java:347)
>  at 
> org.apache.hadoop.util.DataChecksum.verifyChunkedSums(DataChecksum.java:294)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.verifyChunks(BlockReceiver.java:438)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:582)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:885)
>  at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:801)
>  at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
>  at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
>  at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:253)
>  at java.lang.Thread.run(Thread.java:748)
> {quote}
> After this, a new pipeline is created and then wrong data and meta are written 
> to the disk file.
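To make the failure mode above concrete, here is a minimal, self-contained sketch of the kind of chunked verification BlockReceiver.verifyChunks() performs. It uses plain java.util.zip.CRC32 rather than Hadoop's native CRC32C, and the class and method names are illustrative, not the Hadoop source:

{code:java}
import java.util.zip.CRC32;

public class ChunkedChecksumDemo {
  static final int BYTES_PER_CHECKSUM = 512; // dfs.bytes-per-checksum default

  static int checksumOf(byte[] data, int off, int len) {
    CRC32 crc = new CRC32();
    crc.update(data, off, len);
    return (int) crc.getValue();
  }

  // Recompute each chunk's checksum and compare it with the checksum shipped
  // alongside the data; a mismatch is what surfaces in the log above as
  // "Checksum error: ... exp: ... got: ...".
  static void verifyChunkedSums(byte[] data, int[] storedSums) {
    for (int chunk = 0; chunk * BYTES_PER_CHECKSUM < data.length; chunk++) {
      int off = chunk * BYTES_PER_CHECKSUM;
      int len = Math.min(BYTES_PER_CHECKSUM, data.length - off);
      int got = checksumOf(data, off, len);
      if (got != storedSums[chunk]) {
        throw new IllegalStateException("Checksum error at offset " + off
            + " exp: " + storedSums[chunk] + " got: " + got);
      }
    }
  }

  public static void main(String[] args) {
    byte[] data = new byte[1024];
    int[] sums = { checksumOf(data, 0, 512), checksumOf(data, 512, 512) };
    data[700] ^= 1;                // simulate the corruption the report describes
    verifyChunkedSums(data, sums); // throws for the second chunk
  }
}
{code}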



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-16060) There is an inconsistency between replicas of datanodes when hardware is abnormal

2022-05-11 Thread Daniel Ma (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17535888#comment-17535888
 ] 

Daniel Ma edited comment on HDFS-16060 at 5/12/22 6:22 AM:
---

[~ferhui] 

Thanks for your report; I have encountered a similar issue.

There are a couple of points I want to confirm:

1. Does the write operation succeed even with this exception in the Datanode log?

2. Does the read operation also fail?


was (Author: daniel ma):
[~ferhui] 

Thanks for your report; I have encountered a similar issue.

There are a couple of points I want to confirm:

1. Does the write operation succeed even with this exception in the Datanode log?

2. Does the read operation fail owing to the inconsistency between the replica data and its meta?







[jira] [Commented] (HDFS-16060) There is an inconsistency between replicas of datanodes when hardware is abnormal

2022-05-11 Thread Daniel Ma (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17535888#comment-17535888
 ] 

Daniel Ma commented on HDFS-16060:
--

[~ferhui] 

Thanks for your report; I have encountered a similar issue.

There are a couple of points I want to confirm:

1. Does the write operation succeed even with this exception in the Datanode log?

2. Does the read operation fail owing to the inconsistency between the replica data and its meta?







[jira] [Work logged] (HDFS-16456) EC: Decommission a rack with only one dn will fail when the rack number is equal to the replication factor

2022-05-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16456?focusedWorklogId=769424&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-769424
 ]

ASF GitHub Bot logged work on HDFS-16456:
-

Author: ASF GitHub Bot
Created on: 12/May/22 05:25
Start Date: 12/May/22 05:25
Worklog Time Spent: 10m 
  Work Description: jojochuang opened a new pull request, #4304:
URL: https://github.com/apache/hadoop/pull/4304

   
   
   (cherry picked from commit cee8c62498f55794f911ce62edfd4be9e88a7361)
   
Conflicts:

hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetworkTopology.java
   
   Change-Id: I5da7693d6176315f126f1b5107de2bb100ee9778
   
   
   
   ### Description of PR
   (HDFS-16456 was merged in trunk. It has a conflict in branch-3.3 because of 
HDFS-15263. It looks like HDFS-15263 could potentially change the behavior and 
surprise users, so we decided to backport without HDFS-15263.)
   
   ### How was this patch tested?
   
   
   ### For code changes:
   
   - [ ] Does the title or this PR starts with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   




Issue Time Tracking
---

Worklog Id: (was: 769424)
Time Spent: 2h 20m  (was: 2h 10m)

> EC: Decommission a rack with only one dn will fail when the rack number is 
> equal to the replication factor
> 
>
> Key: HDFS-16456
> URL: https://issues.apache.org/jira/browse/HDFS-16456
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ec, namenode
>Affects Versions: 3.4.0
>Reporter: caozhiqiang
>Assignee: caozhiqiang
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: HDFS-16456.001.patch, HDFS-16456.002.patch, 
> HDFS-16456.003.patch, HDFS-16456.004.patch, HDFS-16456.005.patch, 
> HDFS-16456.006.patch, HDFS-16456.007.patch, HDFS-16456.008.patch, 
> HDFS-16456.009.patch, HDFS-16456.010.patch
>
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> In the scenario below, decommissioning will fail with reason TOO_MANY_NODES_ON_RACK:
>  # Enable an EC policy, such as RS-6-3-1024k.
>  # The number of racks in the cluster is less than or equal to the replication 
> number (9).
>  # A rack has only one DN, and we decommission this DN.
> The root cause is in the 
> BlockPlacementPolicyRackFaultTolerant::getMaxNodesPerRack() function, which 
> computes a limit parameter, maxNodesPerRack, for choosing targets. In this 
> scenario, maxNodesPerRack is 1, which means only one datanode can be chosen 
> per rack.
> {code:java}
>   protected int[] getMaxNodesPerRack(int numOfChosen, int numOfReplicas) {
>...
>     // If more replicas than racks, evenly spread the replicas.
>     // This calculation rounds up.
>     int maxNodesPerRack = (totalNumOfReplicas - 1) / numOfRacks + 1;
> return new int[] {numOfReplicas, maxNodesPerRack};
>   } {code}
> int maxNodesPerRack = (totalNumOfReplicas - 1) / numOfRacks + 1;
> is evaluated here with totalNumOfReplicas=9 and numOfRacks=9, giving 
> maxNodesPerRack = (9 - 1) / 9 + 1 = 0 + 1 = 1 under integer division.
> When we decommission a dn that is the only node in its rack, chooseOnce() in 
> BlockPlacementPolicyRackFaultTolerant::chooseTargetInOrder() throws 
> NotEnoughReplicasException, but the exception is not caught, so the code fails 
> to fall back to the chooseEvenlyFromRemainingRacks() function.
> During decommissioning, after targets are chosen, the verifyBlockPlacement() 
> function returns a total rack count that includes the invalid rack, so 
> BlockPlacementStatusDefault::isPlacementPolicySatisfied() returns false, 
> which also causes the decommission to fail.
> {code:java}
>   public BlockPlacementStatus verifyBlockPlacement(DatanodeInfo[] locs,
>       int numberOfReplicas) {
>     if (locs == null)
>       locs = DatanodeDescriptor.EMPTY_ARRAY;
>     if (!clusterMap.hasClusterEverBeenMultiRack()) {
>       // only one rack
>       return new BlockPlacementStatusDefault(1, 1, 1);
>     }
>     // Count locations on different racks.
>     Set racks = new HashSet<>();
>     for (DatanodeInfo dn : locs) {
>       racks.add(dn.getNetworkLocation());
>     }
>     return new BlockPlacementStatusDefault(racks.size(), numberOfReplicas,
>         clusterMap.getNumOfRacks());
>   } {code}
> {code:java}
>   public boolean isPlacementPolicySatisfied(...) {code}
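To make the rack arithmetic above concrete, here is a standalone sketch (plain Java, not the Hadoop source) of the rounding-up formula used by getMaxNodesPerRack():

{code:java}
public class MaxNodesPerRackDemo {
  // Integer division rounds down, so (n - 1) / r + 1 computes ceil(n / r).
  static int maxNodesPerRack(int totalNumOfReplicas, int numOfRacks) {
    return (totalNumOfReplicas - 1) / numOfRacks + 1;
  }

  public static void main(String[] args) {
    // RS-6-3 means 9 replicas; with 9 racks the per-rack limit collapses to 1,
    // so a rack whose single DN is being decommissioned cannot take a target.
    System.out.println(maxNodesPerRack(9, 9)); // 1
    System.out.println(maxNodesPerRack(9, 5)); // 2
  }
}
{code}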

[jira] [Work logged] (HDFS-16540) Data locality is lost when DataNode pod restarts in kubernetes

2022-05-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16540?focusedWorklogId=769414&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-769414
 ]

ASF GitHub Bot logged work on HDFS-16540:
-

Author: ASF GitHub Bot
Created on: 12/May/22 04:20
Start Date: 12/May/22 04:20
Worklog Time Spent: 10m 
  Work Description: saintstack commented on PR #4246:
URL: https://github.com/apache/hadoop/pull/4246#issuecomment-1124514541

   Removing a space caused more tests to run; 4 tests failed instead of the 44 on 
the previous run.
   
   Below is the change in last run.
   
   ```
   From eb904f3adaa55d44aa6494ad116344317e9ec882 Mon Sep 17 00:00:00 2001
   From: stack 
   Date: Wed, 11 May 2022 15:42:18 -0700
   Subject: [PATCH] Remove a space at end of line inside a comment
   ```

Issue Time Tracking
---

Worklog Id: (was: 769414)
Time Spent: 5h 50m  (was: 5h 40m)

> Data locality is lost when DataNode pod restarts in kubernetes 
> ---
>
> Key: HDFS-16540
> URL: https://issues.apache.org/jira/browse/HDFS-16540
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.3.2
>Reporter: Huaxiang Sun
>Assignee: Huaxiang Sun
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 5h 50m
>  Remaining Estimate: 0h
>
> We have an HBase RegionServer and an HDFS DataNode running in one pod. When the 
> pod restarts, we found that data locality is lost after we do a major compaction 
> of the HBase regions. After some debugging, we found that upon pod restart, its 
> IP changes. In DatanodeManager, maps like networktopology are updated with the 
> new info, but host2DatanodeMap is not updated accordingly. When an HDFS client 
> with the new IP tries to find a local DataNode, it fails.
> 
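A toy illustration of the staleness described above, with hypothetical names (this is not the DatanodeManager source): a host-keyed lookup map keeps missing after a node re-registers from a new IP unless that map is re-keyed as well.

{code:java}
import java.util.HashMap;
import java.util.Map;

public class StaleHostMapDemo {
  public static void main(String[] args) {
    // Stand-in for host2DatanodeMap: keyed by the address the DN registered from.
    Map<String, String> hostToDatanode = new HashMap<>();
    hostToDatanode.put("10.0.0.5", "dn-1"); // initial registration

    // Pod restarts; the same datanode re-registers from a new IP. If only the
    // topology map is updated and this map is not re-keyed ...
    System.out.println(hostToDatanode.get("10.0.0.9")); // null -> no local DN found

    // The fix direction: drop the old entry and index the new address too.
    hostToDatanode.remove("10.0.0.5");
    hostToDatanode.put("10.0.0.9", "dn-1");
    System.out.println(hostToDatanode.get("10.0.0.9")); // dn-1 -> locality restored
  }
}
{code}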






[jira] [Work logged] (HDFS-16540) Data locality is lost when DataNode pod restarts in kubernetes

2022-05-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16540?focusedWorklogId=769407&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-769407
 ]

ASF GitHub Bot logged work on HDFS-16540:
-

Author: ASF GitHub Bot
Created on: 12/May/22 03:15
Start Date: 12/May/22 03:15
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on PR #4246:
URL: https://github.com/apache/hadoop/pull/4246#issuecomment-1124486913

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 45s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ branch-3.3 Compile Tests _ |
   | -1 :x: |  mvninstall  |   6m 15s | 
[/branch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4246/10/artifact/out/branch-mvninstall-root.txt)
 |  root in branch-3.3 failed.  |
   | -1 :x: |  compile  |   0m 30s | 
[/branch-compile-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4246/10/artifact/out/branch-compile-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in branch-3.3 failed.  |
   | -0 :warning: |  checkstyle  |   0m 27s | 
[/buildtool-branch-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4246/10/artifact/out/buildtool-branch-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  The patch fails to run checkstyle in hadoop-hdfs  |
   | +1 :green_heart: |  mvnsite  |   3m 14s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  javadoc  |   1m 55s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  spotbugs  |   4m  0s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  shadedclient  |  29m 31s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 21s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 14s |  |  the patch passed  |
   | -1 :x: |  javac  |   1m 14s | 
[/results-compile-javac-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4246/10/artifact/out/results-compile-javac-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs-project_hadoop-hdfs generated 567 new + 0 unchanged - 0 fixed = 
567 total (was 0)  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 55s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4246/10/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 52 new + 0 unchanged - 
0 fixed = 52 total (was 0)  |
   | +1 :green_heart: |  mvnsite  |   1m 22s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 27s |  |  the patch passed  |
   | +1 :green_heart: |  spotbugs  |   3m 15s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  26m  0s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 189m 51s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4246/10/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m 14s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 270m 37s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.namenode.ha.TestHAAppend |
   |   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFS |
   |   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
   |   | 
hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4246/10/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4246 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 691ecf7100cf 4.15.0-156-generic #163-Ubuntu SMP Thu Aug 19 
23:31:58 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personal

[jira] [Resolved] (HDFS-16465) Remove redundant strings.h inclusions

2022-05-11 Thread Gautham Banasandra (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gautham Banasandra resolved HDFS-16465.
---
Fix Version/s: 3.4.0
   Resolution: Fixed

Merged PR https://github.com/apache/hadoop/pull/4279 to trunk.

> Remove redundant strings.h inclusions
> -
>
> Key: HDFS-16465
> URL: https://issues.apache.org/jira/browse/HDFS-16465
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: libhdfs++
>Affects Versions: 3.4.0
> Environment: Windows 10
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: libhdfscpp, pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> *strings.h* was included in a number of C/C++ files where it is redundant. 
> Also, strings.h is not available on Windows and thus isn't cross-platform 
> compatible. These inclusions of strings.h must therefore be removed.






[jira] [Commented] (HDFS-16576) Remove unused Imports in Hadoop HDFS project

2022-05-11 Thread JiangHua Zhu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17535817#comment-17535817
 ] 

JiangHua Zhu commented on HDFS-16576:
-

It looks like what is described here is somewhat simple.

> Remove unused Imports in Hadoop HDFS project
> 
>
> Key: HDFS-16576
> URL: https://issues.apache.org/jira/browse/HDFS-16576
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ashutosh Gupta
>Assignee: Ashutosh Gupta
>Priority: Minor
>
> h3. Optimize Imports to keep code clean
>  # Remove any unused imports






[jira] [Work logged] (HDFS-16525) System.err should be used when an error occurs in multiple methods in the DFSAdmin class

2022-05-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16525?focusedWorklogId=769348&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-769348
 ]

ASF GitHub Bot logged work on HDFS-16525:
-

Author: ASF GitHub Bot
Created on: 11/May/22 23:22
Start Date: 11/May/22 23:22
Worklog Time Spent: 10m 
  Work Description: singer-bin commented on PR #4122:
URL: https://github.com/apache/hadoop/pull/4122#issuecomment-1124378338

   @ferhui The error seems unrelated; can you help review?




Issue Time Tracking
---

Worklog Id: (was: 769348)
Time Spent: 1h 40m  (was: 1.5h)

> System.err should be used when an error occurs in multiple methods in the 
> DFSAdmin class
> -
>
> Key: HDFS-16525
> URL: https://issues.apache.org/jira/browse/HDFS-16525
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: dfsadmin
>Affects Versions: 3.3.2
>Reporter: yanbin.zhang
>Assignee: yanbin.zhang
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> System.err should be used when an error occurs in multiple methods in the 
> DFSAdmin class, as follows:
> {code:java}
> //DFSAdmin#refreshCallQueue
> ...
> try{
>   proxy.getProxy().refreshCallQueue();
>   System.out.println("Refresh call queue successful for "
>   + proxy.getAddress());
> }catch (IOException ioe){
>   System.out.println("Refresh call queue failed for "
>   + proxy.getAddress());
>   exceptions.add(ioe);
> }
> ...
> {code}
> The test method closed first in TestDFSAdminWithHA also needs to be 
> modified; otherwise an error will be reported, similar to the following:
> {code:java}
> [ERROR] Failures:
> [ERROR]   
> TestDFSAdminWithHA.testRefreshCallQueueNN1DownNN2Up:726->assertOutputMatches:77
>  Expected output to match 'Refresh call queue failed for.*
> Refresh call queue successful for.*
> ' but err_output was:
> Refresh call queue failed for localhost/127.0.0.1:12876
> refreshCallQueue: Call From h110/10.1.234.110 to localhost:12876 failed on 
> connection exception: java.net.ConnectException: Connection refused; For more 
> details see:  http://wiki.apache.org/hadoop/ConnectionRefused and output was:
> Refresh call queue successful for localhost/127.0.0.1:12878{code}
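A minimal runnable sketch of the direction the summary describes, with a hypothetical stand-in for the RPC proxy (this is not the actual DFSAdmin patch): failures go to System.err so that err_output sees them, while successes stay on System.out.

{code:java}
import java.io.IOException;

public class RefreshCallQueueDemo {
  interface CallQueueAdmin {
    void refreshCallQueue() throws IOException;
  }

  static void refresh(CallQueueAdmin proxy, String address) {
    try {
      proxy.refreshCallQueue();
      System.out.println("Refresh call queue successful for " + address);
    } catch (IOException ioe) {
      // The fix under discussion: error messages go to stderr, not stdout.
      System.err.println("Refresh call queue failed for " + address);
    }
  }

  public static void main(String[] args) {
    refresh(() -> { throw new IOException("Connection refused"); },
        "localhost:12876");
    refresh(() -> { /* succeeds */ }, "localhost:12878");
  }
}
{code}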






[jira] [Work logged] (HDFS-16540) Data locality is lost when DataNode pod restarts in kubernetes

2022-05-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16540?focusedWorklogId=769254&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-769254
 ]

ASF GitHub Bot logged work on HDFS-16540:
-

Author: ASF GitHub Bot
Created on: 11/May/22 19:22
Start Date: 11/May/22 19:22
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on PR #4246:
URL: https://github.com/apache/hadoop/pull/4246#issuecomment-1124201434

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 44s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ branch-3.3 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  36m 27s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  compile  |   1m 33s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  checkstyle  |   1m 13s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  mvnsite  |   1m 45s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  javadoc  |   1m 52s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  spotbugs  |   3m 41s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  shadedclient  |  27m 12s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 22s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 17s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 17s |  |  the patch passed  |
   | -1 :x: |  blanks  |   0m  0s | 
[/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4246/9/artifact/out/blanks-eol.txt)
 |  The patch has 1 line(s) that end in blanks. Use git apply --whitespace=fix 
<>. Refer https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  checkstyle  |   0m 45s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 17s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 29s |  |  the patch passed  |
   | +1 :green_heart: |  spotbugs  |   3m 59s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  33m  9s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  |  34m 55s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4246/9/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +0 :ok: |  asflicense  |   0m 41s |  |  ASF License check generated no 
output?  |
   |  |   | 150m 13s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.tools.TestECAdmin |
   |   | hadoop.hdfs.tools.TestViewFileSystemOverloadSchemeWithDFSAdmin |
   |   | 
hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewerWithStripedBlocks |
   |   | hadoop.cli.TestHDFSCLI |
   |   | hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer |
   |   | hadoop.hdfs.TestBlockStoragePolicy |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4246/9/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4246 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 80c88472574d 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | branch-3.3 / 53fdbf60a26f18341150743f860b0713ca2d632a |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~18.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4246/9/testReport/ |
   | Max. process+thread count | 1567 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4246/9/console |
   | versions | git=2.17.1 maven=3.6.0 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




Issue Time Tracking
---

Worklog Id: (was: 769254)
Time Spent: 5.5h  (was: 5h 20m)

> Data locality is lost when DataNode pod restarts in kubernetes

[jira] [Work logged] (HDFS-16465) Remove redundant strings.h inclusions

2022-05-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16465?focusedWorklogId=769213&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-769213
 ]

ASF GitHub Bot logged work on HDFS-16465:
-

Author: ASF GitHub Bot
Created on: 11/May/22 17:34
Start Date: 11/May/22 17:34
Worklog Time Spent: 10m 
  Work Description: GauthamBanasandra merged PR #4279:
URL: https://github.com/apache/hadoop/pull/4279




Issue Time Tracking
---

Worklog Id: (was: 769213)
Time Spent: 2h 40m  (was: 2.5h)

> Remove redundant strings.h inclusions
> -
>
> Key: HDFS-16465
> URL: https://issues.apache.org/jira/browse/HDFS-16465
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: libhdfs++
>Affects Versions: 3.4.0
> Environment: Windows 10
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: libhdfscpp, pull-request-available
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> *strings.h* was included in a number of C/C++ files where it is redundant. 
> Also, strings.h is not available on Windows and thus isn't cross-platform 
> compatible. These inclusions of strings.h must therefore be removed.






[jira] [Created] (HDFS-16577) Let administrator override connection details when registering datanodes

2022-05-11 Thread Jira
Teo Klestrup Röijezon created HDFS-16577:


 Summary: Let administrator override connection details when 
registering datanodes
 Key: HDFS-16577
 URL: https://issues.apache.org/jira/browse/HDFS-16577
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: datanode, namenode
Affects Versions: 3.2.2
Reporter: Teo Klestrup Röijezon


Currently (as of 3.2.2, but reading through the release notes this doesn't seem 
to have changed since then), DataNodes use the same properties for deciding 
which port to bind each service to as for deciding which ports are included in 
the `DatanodeRegistration` sent to the NameNode. Further, NameNodes overwrite 
the DataNode's IP address with the incoming address during registration.

Both of these prevent external users from connecting to DataNodes that are 
hosted behind some sort of NAT (such as Kubernetes).

I have created a spike branch 
([https://github.com/stackabletech/hadoop/tree/spike/override-datanode-id], 
based on v3.2.2) that I have confirmed solves this problem for us. There's 
clearly some work to be done to integrate this properly (such as using the 
regular Hadoop config system and falling back to the old behaviour if no 
override is configured). I'd be happy to take that on to the best of my ability 
(with the caveats that I'm not super familiar with the Hadoop codebase and that 
my Java is quite rusty at this point) if the overall direction seems acceptable.
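For the config-with-fallback integration mentioned above, a minimal sketch of the pattern using org.apache.hadoop.conf.Configuration; the property name here is purely hypothetical (no such key exists today):

{code:java}
import org.apache.hadoop.conf.Configuration;

public class AdvertisedAddressDemo {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    String bindHost = "0.0.0.0"; // what the service actually binds to
    // Hypothetical override key; fall back to the old behaviour when unset.
    String advertised =
        conf.get("dfs.datanode.hypothetical.advertised.hostname", bindHost);
    System.out.println("register with NameNode as: " + advertised);
  }
}
{code}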






[jira] [Work logged] (HDFS-16525) System.err should be used when an error occurs in multiple methods in the DFSAdmin class

2022-05-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16525?focusedWorklogId=768995&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-768995
 ]

ASF GitHub Bot logged work on HDFS-16525:
-

Author: ASF GitHub Bot
Created on: 11/May/22 11:01
Start Date: 11/May/22 11:01
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on PR #4122:
URL: https://github.com/apache/hadoop/pull/4122#issuecomment-1123579480

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 55s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  42m 28s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 38s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   1m 31s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   1m 20s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 38s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 19s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 42s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 41s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  25m 59s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 21s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 28s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   1m 28s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 19s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   1m 19s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  2s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 24s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 59s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 32s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 29s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  25m 21s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 374m 53s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4122/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m 17s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 494m  0s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.tools.TestDFSAdmin |
   |   | hadoop.hdfs.TestReplaceDatanodeFailureReplication |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4122/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4122 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 298cebaa9c01 4.15.0-166-generic #174-Ubuntu SMP Wed Dec 8 
19:07:44 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 01f97992a8d10cc0742fdff911fa8dff01758ba5 |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4122/2/testReport/ |
   | Max. process

[jira] [Work logged] (HDFS-16563) Namenode WebUI prints sensitive information on Token Expiry

2022-05-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16563?focusedWorklogId=768978&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-768978
 ]

ASF GitHub Bot logged work on HDFS-16563:
-

Author: ASF GitHub Bot
Created on: 11/May/22 10:10
Start Date: 11/May/22 10:10
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on PR #4241:
URL: https://github.com/apache/hadoop/pull/4241#issuecomment-1123505227

   LGTM; will leave final vote to @jojochuang 




Issue Time Tracking
---

Worklog Id: (was: 768978)
Time Spent: 2h 10m  (was: 2h)

> Namenode WebUI prints sensitive information on Token Expiry
> --
>
> Key: HDFS-16563
> URL: https://issues.apache.org/jira/browse/HDFS-16563
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode, security, webhdfs
>Reporter: Renukaprasad C
>Assignee: Renukaprasad C
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2022-04-27-23-01-16-033.png, 
> image-2022-04-27-23-28-40-568.png
>
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> Log in to the Namenode WebUI.
> Wait for the token to expire. (Or modify the token refresh time 
> dfs.namenode.delegation.token.renew/update-interval to a lower value.)
> Refresh the WebUI after the token expiry.
> The full token information gets printed in the WebUI.
>  
> !image-2022-04-27-23-01-16-033.png!






[jira] [Work logged] (HDFS-14750) RBF: Improved isolation for downstream name nodes. {Dynamic}

2022-05-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14750?focusedWorklogId=768917&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-768917
 ]

ASF GitHub Bot logged work on HDFS-14750:
-

Author: ASF GitHub Bot
Created on: 11/May/22 09:12
Start Date: 11/May/22 09:12
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on PR #4199:
URL: https://github.com/apache/hadoop/pull/4199#issuecomment-1123407075

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 57s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  42m 12s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 55s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   0m 49s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   0m 42s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 54s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  0s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m  7s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 47s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  24m 11s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 38s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 41s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   0m 41s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 36s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   0m 36s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 22s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 38s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  1s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  javadoc  |   0m 38s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 53s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 27s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  23m 34s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  30m 43s |  |  hadoop-hdfs-rbf in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 44s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 137m  8s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4199/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4199 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell xml |
   | uname | Linux 6eb815a6f090 4.15.0-175-generic #184-Ubuntu SMP Thu Mar 24 
17:48:36 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 8f31e0806dc88220c534a15bf74cfd599c6dbeca |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4199/6/testReport/ |
   | Max. process+thread count | 2223 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4199/6/console |