[jira] [Updated] (HDFS-17456) Fix the dfsused statistics of datanode are incorrect when appending a file.

2024-04-07 Thread fuchaohong (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fuchaohong updated HDFS-17456:
--
Description: 
In our production env, we found that the namenode page showed that the datanode 
space had been used up, but the actual datanode machine still had a lot of free 
space. After troubleshooting, we found that after the datanode received an append 
request, the calculation of dfsused was incorrect when adding the FinalizedBlock. 
The following is the dfsused after each append of 100:
 |*Now*|*Expect*|
|0|0|
|100|100|
|300|200|
|600|300|

> Fix the dfsused statistics of datanode are incorrect when appending a file.
> ---
>
> Key: HDFS-17456
> URL: https://issues.apache.org/jira/browse/HDFS-17456
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.3.3
>Reporter: fuchaohong
>Priority: Major
>
> In our production env, we found that the namenode page showed that the 
> datanode space had been used up, but the actual datanode machine still had a 
> lot of free space. After troubleshooting, we found that after the datanode 
> received an append request, the calculation of dfsused was incorrect when 
> adding the FinalizedBlock. The following is the dfsused after each append of 100:
>  |*Now*|*Expect*|
> |0|0|
> |100|100|
> |300|200|
> |600|300|
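
To make the numbers in the table concrete: the "Expect" column grows by exactly the 100 appended bytes each time, while the "Now" column grows by the full length of the re-finalized replica, i.e. the old finalized length is never subtracted. A minimal, self-contained Java sketch of that accounting (hypothetical class, not the DataNode code):

{code:java}
// Illustrative simulation only; DfsUsedAppendSketch is a hypothetical class,
// not part of Hadoop. It reproduces the "Now" vs "Expect" columns above.
public class DfsUsedAppendSketch {
  public static void main(String[] args) {
    long buggy = 0;      // dfsused as the bug computes it
    long expected = 0;   // dfsused as it should be
    long replicaLen = 0; // current finalized replica length
    for (int i = 1; i <= 3; i++) {
      long oldLen = replicaLen;          // finalized length before this append
      replicaLen += 100;                 // append 100 bytes, block re-finalized
      expected += replicaLen - oldLen;   // correct: charge only the 100-byte delta
      buggy += replicaLen;               // bug: charge the whole new replica again
      System.out.printf("append %d: now=%d expect=%d%n", i, buggy, expected);
    }
    // Prints 100/100, 300/200, 600/300 -- the same progression as the table.
  }
}
{code}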



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-17456) Fix the dfsused statistics of datanode are incorrect when appending a file.

2024-04-07 Thread fuchaohong (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fuchaohong updated HDFS-17456:
--
Description: 
In our production env, we found that the namenode page showed that the datanode 
space had been used up, but the actual datanode machine still had a lot of free 
space. After troubleshooting, the calculation of dfsused was incorrect when 
adding FinalizedBlock. The following is the dfsused after each append of 100
|*Error*|*Expect*|
|0|0|
|100|100|
|300|200|
|600|300|

  was:
In our production env, we found that the namenode page showed that the datanode 
space had been used up, but the actual datanode machine still had a lot of free 
space. After troubleshooting, we found that after the datanode received an append 
request, the calculation of dfsused was incorrect when adding the FinalizedBlock. 
The following is the dfsused after each append of 100:
 |*Now*|*Expect*|
|0|0|
|100|100|
|300|200|
|600|300|


> Fix the dfsused statistics of datanode are incorrect when appending a file.
> ---
>
> Key: HDFS-17456
> URL: https://issues.apache.org/jira/browse/HDFS-17456
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.3.3
>Reporter: fuchaohong
>Priority: Major
>
> In our production env, we found that the namenode page showed that the 
> datanode space had been used up, but the actual datanode machine still had a 
> lot of free space. After troubleshooting, the calculation of dfsused was 
> incorrect when adding FinalizedBlock. The following is the dfsused after each 
> append of 100
> |*Error*|*Expect*|
> |0|0|
> |100|100|
> |300|200|
> |600|300|



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-17456) Fix the dfsused statistics of datanode are incorrect when appending a file.

2024-04-07 Thread fuchaohong (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fuchaohong updated HDFS-17456:
--
Description: 
In our production env, we found that the namenode page showed that the datanode 
space had been used up, but the actual datanode machine still had a lot of free 
space. After troubleshooting, the dfsused statistics of datanode are incorrect 
when appending a file. The following is the dfsused after each append of 100.
|*Error*|*Expect*|
|0|0|
|100|100|
|300|200|
|600|300|

  was:
In our production env, we found that the namenode page showed that the datanode 
space had been used up, but the actual datanode machine still had a lot of free 
space. After troubleshooting, the dfsused statistics of datanode are incorrect 
when appending a file. The following is the dfsused after each append of 100
|*Error*|*Expect*|
|0|0|
|100|100|
|300|200|
|600|300|


> Fix the dfsused statistics of datanode are incorrect when appending a file.
> ---
>
> Key: HDFS-17456
> URL: https://issues.apache.org/jira/browse/HDFS-17456
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.3.3
>Reporter: fuchaohong
>Priority: Major
>
> In our production env, we found that the namenode page showed that the 
> datanode space had been used up, but the actual datanode machine still had a 
> lot of free space. After troubleshooting, the dfsused statistics of datanode 
> are incorrect when appending a file. The following is the dfsused after each 
> append of 100.
> |*Error*|*Expect*|
> |0|0|
> |100|100|
> |300|200|
> |600|300|



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-17456) Fix the dfsused statistics of datanode are incorrect when appending a file.

2024-04-07 Thread fuchaohong (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fuchaohong updated HDFS-17456:
--
Description: 
In our production env, we found that the namenode page showed that the datanode 
space had been used up, but the actual datanode machine still had a lot of free 
space. After troubleshooting, the dfsused statistics of datanode are incorrect 
when appending a file. The following is the dfsused after each append of 100
|*Error*|*Expect*|
|0|0|
|100|100|
|300|200|
|600|300|

  was:
In our production env, we found that the namenode page showed that the datanode 
space had been used up, but the actual datanode machine still had a lot of free 
space. After troubleshooting, the calculation of dfsused was incorrect when 
adding FinalizedBlock. The following is the dfsused after each append of 100
|*Error*|*Expect*|
|0|0|
|100|100|
|300|200|
|600|300|


> Fix the dfsused statistics of datanode are incorrect when appending a file.
> ---
>
> Key: HDFS-17456
> URL: https://issues.apache.org/jira/browse/HDFS-17456
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.3.3
>Reporter: fuchaohong
>Priority: Major
>
> In our production env, we found that the namenode page showed that the 
> datanode space had been used up, but the actual datanode machine still had a 
> lot of free space. After troubleshooting, the dfsused statistics of datanode 
> are incorrect when appending a file. The following is the dfsused after each 
> append of 100
> |*Error*|*Expect*|
> |0|0|
> |100|100|
> |300|200|
> |600|300|



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-17456) Fix the dfsused statistics of datanode are incorrect when appending a file.

2024-04-07 Thread fuchaohong (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fuchaohong updated HDFS-17456:
--
Description: 
In our production env, The namenode page showed that the datanode space had 
been used up, but the actual datanode machine still had a lot of free space. 
After troubleshooting, the dfsused statistics of datanode are incorrect when 
appending a file. The following is the dfsused after each append of 100.
|*Error*|*Expect*|
|0|0|
|100|100|
|300|200|
|600|300|

  was:
In our production env, we found that the namenode page showed that the datanode 
space had been used up, but the actual datanode machine still had a lot of free 
space. After troubleshooting, the dfsused statistics of datanode are incorrect 
when appending a file. The following is the dfsused after each append of 100.
|*Error*|*Expect*|
|0|0|
|100|100|
|300|200|
|600|300|


> Fix the dfsused statistics of datanode are incorrect when appending a file.
> ---
>
> Key: HDFS-17456
> URL: https://issues.apache.org/jira/browse/HDFS-17456
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.3.3
>Reporter: fuchaohong
>Priority: Major
>
> In our production env, The namenode page showed that the datanode space had 
> been used up, but the actual datanode machine still had a lot of free space. 
> After troubleshooting, the dfsused statistics of datanode are incorrect when 
> appending a file. The following is the dfsused after each append of 100.
> |*Error*|*Expect*|
> |0|0|
> |100|100|
> |300|200|
> |600|300|



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-17456) Fix the dfsused statistics of datanode are incorrect when appending a file.

2024-04-07 Thread fuchaohong (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fuchaohong updated HDFS-17456:
--
Description: 
In our production env, the namenode page showed that the datanode space had 
been used up, but the actual datanode machine still had a lot of free space. 
After troubleshooting, the dfsused statistics of datanode are incorrect when 
appending a file. The following is the dfsused after each append of 100.
|*Error*|*Expect*|
|0|0|
|100|100|
|300|200|
|600|300|

  was:
In our production env, The namenode page showed that the datanode space had 
been used up, but the actual datanode machine still had a lot of free space. 
After troubleshooting, the dfsused statistics of datanode are incorrect when 
appending a file. The following is the dfsused after each append of 100.
|*Error*|*Expect*|
|0|0|
|100|100|
|300|200|
|600|300|


> Fix the dfsused statistics of datanode are incorrect when appending a file.
> ---
>
> Key: HDFS-17456
> URL: https://issues.apache.org/jira/browse/HDFS-17456
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.3.3
>Reporter: fuchaohong
>Priority: Major
>
> In our production env, the namenode page showed that the datanode space had 
> been used up, but the actual datanode machine still had a lot of free space. 
> After troubleshooting, the dfsused statistics of datanode are incorrect when 
> appending a file. The following is the dfsused after each append of 100.
> |*Error*|*Expect*|
> |0|0|
> |100|100|
> |300|200|
> |600|300|



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17456) Fix the dfsused statistics of datanode are incorrect when appending a file.

2024-04-07 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17834625#comment-17834625
 ] 

ASF GitHub Bot commented on HDFS-17456:
---

ZanderXu commented on code in PR #6713:
URL: https://github.com/apache/hadoop/pull/6713#discussion_r1554836176


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeImpl.java:
##
@@ -1291,7 +1291,9 @@ public ReplicaInPipeline append(String bpid, ReplicaInfo replicaInfo,
 
 // rename meta file to rbw directory
 // rename block file to rbw directory
+long oldReplicaLength = replicaInfo.getMetadataLength() + replicaInfo.getBlockDataLength();

Review Comment:
   `long oldReplicaLength = replicaInfo.getNumBytes() + replicaInfo.getMetadataLength();`
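
For readers following the review: the added line captures the replica's footprint (block data plus metadata) before the append rewrites it, so that only the growth is later charged to dfsused. A hedged sketch of that idea with hypothetical names (computeAppendDelta is not a real FsVolumeImpl method, and this is not the actual PR #6713 diff):

{code:java}
// Sketch only: remember the old finalized replica's footprint before the append,
// then charge dfsUsed with just the growth once the block is re-finalized.
static long computeAppendDelta(long oldDataLen, long oldMetaLen,
                               long newDataLen, long newMetaLen) {
  long oldReplicaLength = oldDataLen + oldMetaLen; // captured before the append
  long newReplicaLength = newDataLen + newMetaLen; // after the block is finalized again
  return newReplicaLength - oldReplicaLength;      // what dfsUsed should actually grow by
}
{code}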





> Fix the dfsused statistics of datanode are incorrect when appending a file.
> ---
>
> Key: HDFS-17456
> URL: https://issues.apache.org/jira/browse/HDFS-17456
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.3.3
>Reporter: fuchaohong
>Priority: Major
>
> In our production env, the namenode page showed that the datanode space had 
> been used up, but the actual datanode machine still had a lot of free space. 
> After troubleshooting, the dfsused statistics of datanode are incorrect when 
> appending a file. The following is the dfsused after each append of 100.
> |*Error*|*Expect*|
> |0|0|
> |100|100|
> |300|200|
> |600|300|



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-17456) Fix the dfsused statistics of datanode are incorrect when appending a file.

2024-04-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-17456:
--
Labels: pull-request-available  (was: )

> Fix the dfsused statistics of datanode are incorrect when appending a file.
> ---
>
> Key: HDFS-17456
> URL: https://issues.apache.org/jira/browse/HDFS-17456
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.3.3
>Reporter: fuchaohong
>Priority: Major
>  Labels: pull-request-available
>
> In our production env, the namenode page showed that the datanode space had 
> been used up, but the actual datanode machine still had a lot of free space. 
> After troubleshooting, the dfsused statistics of datanode are incorrect when 
> appending a file. The following is the dfsused after each append of 100.
> |*Error*|*Expect*|
> |0|0|
> |100|100|
> |300|200|
> |600|300|



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17397) Choose another DN as soon as possible, when encountering network issues

2024-04-07 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17834628#comment-17834628
 ] 

ASF GitHub Bot commented on HDFS-17397:
---

xleoken commented on PR #6591:
URL: https://github.com/apache/hadoop/pull/6591#issuecomment-2041355790

   cc @Hexiaoqiao @ZanderXu




> Choose another DN as soon as possible, when encountering network issues
> ---
>
> Key: HDFS-17397
> URL: https://issues.apache.org/jira/browse/HDFS-17397
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: xleoken
>Priority: Minor
>  Labels: pull-request-available
> Attachments: hadoop.png
>
>
> Choose another DN as soon as possible, when encountering network issues.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17455) Fix Client throw IndexOutOfBoundsException in DFSInputStream#fetchBlockAt

2024-04-07 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17834630#comment-17834630
 ] 

ASF GitHub Bot commented on HDFS-17455:
---

ZanderXu commented on PR #6710:
URL: https://github.com/apache/hadoop/pull/6710#issuecomment-2041357081

   An EOFException is needed if the offset is bigger than the file length.




> Fix Client throw IndexOutOfBoundsException in DFSInputStream#fetchBlockAt
> -
>
> Key: HDFS-17455
> URL: https://issues.apache.org/jira/browse/HDFS-17455
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Haiyang Hu
>Assignee: Haiyang Hu
>Priority: Major
>  Labels: pull-request-available
>
> When the client reads data and connects to the datanode, the datanode access 
> token may be invalid at that moment, which throws InvalidBlockTokenException. 
> The subsequent call to the fetchBlockAt method then throws 
> java.lang.IndexOutOfBoundsException, causing the read to fail.
> *Root cause:*
> * The HDFS file contains only one RBW block, with a block data size of 2048KB.
> * The client opens this file and seeks to the offset of 1024KB to read data.
> * The DFSInputStream#getBlockReader method connects to the datanode; because the 
> datanode access token is invalid at this time, it throws InvalidBlockTokenException, 
> and the subsequent call to DFSInputStream#fetchBlockAt throws 
> java.lang.IndexOutOfBoundsException.
> {code:java}
> private synchronized DatanodeInfo blockSeekTo(long target) throws IOException {
>   if (target >= getFileLength()) {
>     // the target size is smaller than fileLength (completeBlockSize + lastBlockBeingWrittenLength),
>     // here at this time target is 1024 and getFileLength is 2048
>     throw new IOException("Attempted to read past end of file");
>   }
>   ...
>   while (true) {
>     ...
>     try {
>       blockReader = getBlockReader(targetBlock, offsetIntoBlock,
>           targetBlock.getBlockSize() - offsetIntoBlock, targetAddr,
>           storageType, chosenNode);
>       if (connectFailedOnce) {
>         DFSClient.LOG.info("Successfully connected to " + targetAddr +
>             " for " + targetBlock.getBlock());
>       }
>       return chosenNode;
>     } catch (IOException ex) {
>       ...
>       } else if (refetchToken > 0 && tokenRefetchNeeded(ex, targetAddr)) {
>         refetchToken--;
>         // Here will catch InvalidBlockTokenException.
>         fetchBlockAt(target);
>       } else {
>         ...
>       }
>     }
>   }
> }
>
> private LocatedBlock fetchBlockAt(long offset, long length, boolean useCache)
>     throws IOException {
>   maybeRegisterBlockRefresh();
>   synchronized (infoLock) {
>     // Here the locatedBlocks only contains one locatedBlock, at this time the offset is 1024 and fileLength is 0,
>     // so the targetBlockIdx is -2
>     int targetBlockIdx = locatedBlocks.findBlock(offset);
>     if (targetBlockIdx < 0) { // block is not cached
>       targetBlockIdx = LocatedBlocks.getInsertIndex(targetBlockIdx);
>       // Here the targetBlockIdx is 1;
>       useCache = false;
>     }
>     if (!useCache) { // fetch blocks
>       final LocatedBlocks newBlocks = (length == 0)
>           ? dfsClient.getLocatedBlocks(src, offset)
>           : dfsClient.getLocatedBlocks(src, offset, length);
>       if (newBlocks == null || newBlocks.locatedBlockCount() == 0) {
>         throw new EOFException("Could not find target position " + offset);
>       }
>       // Update the LastLocatedBlock, if offset is for last block.
>       if (offset >= locatedBlocks.getFileLength()) {
>         setLocatedBlocksFields(newBlocks, getLastBlockLength(newBlocks));
>       } else {
>         locatedBlocks.insertRange(targetBlockIdx,
>             newBlocks.getLocatedBlocks());
>       }
>     }
>     // Here the locatedBlocks only contains one locatedBlock, so will throw java.lang.IndexOutOfBoundsException: Index 1 out of bounds for length 1
>     return locatedBlocks.get(targetBlockIdx);
>   }
> }
> {code}
> The client exception:
> {code:java}
> java.lang.IndexOutOfBoundsException: Index 1 out of bounds for length 1
> at java.base/jdk.internal.util.Preconditions.outOfBounds(Preconditions.java:64)
> at java.base/jdk.internal.util.Preconditions.outOfBoundsCheckIndex(Preconditions.java:70)
> at java.base/jdk.internal.util.Preconditions.checkIndex(Preconditions.java:266)
> at java.base/java.util.Objects.checkIndex(Objects.java:359)
> at java.base/java.util.ArrayList.get(ArrayList.java:427)
> at org.apache.hadoop.hdfs.protocol.LocatedBlocks.get(LocatedBlocks.java:87)
> at org.apache.hadoop.hdfs.DFSInput
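
A note on the index arithmetic in the snippet above: findBlock follows the usual binary-search convention, so a negative result encodes the insertion point. Assuming LocatedBlocks.getInsertIndex decodes it the standard way (a hedged sketch, not a verified copy of the Hadoop source):

{code:java}
// Standard binary-search decoding: a result of -2 means "not found, insert at index 1".
static int getInsertIndex(int binSearchResult) {
  return -(binSearchResult) - 1;
}
// With a single cached LocatedBlock, locatedBlocks.get(1) is then out of bounds
// for length 1, which is exactly the IndexOutOfBoundsException in the stack trace above.
{code}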

[jira] [Commented] (HDFS-17397) Choose another DN as soon as possible, when encountering network issues

2024-04-07 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17834631#comment-17834631
 ] 

ASF GitHub Bot commented on HDFS-17397:
---

xleoken commented on PR #6591:
URL: https://github.com/apache/hadoop/pull/6591#issuecomment-2041359268

   cc @Hexiaoqiao @ZanderXu @slfan1989




> Choose another DN as soon as possible, when encountering network issues
> ---
>
> Key: HDFS-17397
> URL: https://issues.apache.org/jira/browse/HDFS-17397
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: xleoken
>Priority: Minor
>  Labels: pull-request-available
> Attachments: hadoop.png
>
>
> Choose another DN as soon as possible, when encountering network issues.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17456) Fix the dfsused statistics of datanode are incorrect when appending a file.

2024-04-07 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17834633#comment-17834633
 ] 

ASF GitHub Bot commented on HDFS-17456:
---

fuchaohong commented on code in PR #6713:
URL: https://github.com/apache/hadoop/pull/6713#discussion_r1554843103


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeImpl.java:
##
@@ -1291,7 +1291,9 @@ public ReplicaInPipeline append(String bpid, ReplicaInfo replicaInfo,
 
 // rename meta file to rbw directory
 // rename block file to rbw directory
+long oldReplicaLength = replicaInfo.getMetadataLength() + replicaInfo.getBlockDataLength();

Review Comment:
   Thanks @ZanderXu for your reviews, I have edited.





> Fix the dfsused statistics of datanode are incorrect when appending a file.
> ---
>
> Key: HDFS-17456
> URL: https://issues.apache.org/jira/browse/HDFS-17456
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.3.3
>Reporter: fuchaohong
>Priority: Major
>  Labels: pull-request-available
>
> In our production env, the namenode page showed that the datanode space had 
> been used up, but the actual datanode machine still had a lot of free space. 
> After troubleshooting, the dfsused statistics of datanode are incorrect when 
> appending a file. The following is the dfsused after each append of 100.
> |*Error*|*Expect*|
> |0|0|
> |100|100|
> |300|200|
> |600|300|



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17411) [FGL] Client RPCs involving snapshot support fine-grained lock

2024-04-07 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17834640#comment-17834640
 ] 

ASF GitHub Bot commented on HDFS-17411:
---

ZanderXu opened a new pull request, #6714:
URL: https://github.com/apache/hadoop/pull/6714

   There are some client RPCs that handle snapshots.
   
   This ticket is used to make these RPCs support fine-grained locking.
   
   - getSnapshottableDirListing
   - getSnapshotListing
   - createSnapshot
   - deleteSnapshot
   - renameSnapshot
   - allowSnapshot
   - disallowSnapshot
   - getSnapshotDiffReport
   - getSnapshotDiffReportListing




> [FGL] Client RPCs involving snapshot support fine-grained lock
> --
>
> Key: HDFS-17411
> URL: https://issues.apache.org/jira/browse/HDFS-17411
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Major
>
> There are some client RPCs that handle snapshots.
> This ticket is used to make these RPCs support fine-grained locking.
>  * getSnapshottableDirListing
>  * getSnapshotListing
>  * createSnapshot
>  * deleteSnapshot
>  * renameSnapshot
>  * allowSnapshot
>  * disallowSnapshot
>  * getSnapshotDiffReport
>  * getSnapshotDiffReportListing



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-17411) [FGL] Client RPCs involving snapshot support fine-grained lock

2024-04-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-17411:
--
Labels: pull-request-available  (was: )

> [FGL] Client RPCs involving snapshot support fine-grained lock
> --
>
> Key: HDFS-17411
> URL: https://issues.apache.org/jira/browse/HDFS-17411
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Major
>  Labels: pull-request-available
>
> There are some client RPCs that handle snapshots.
> This ticket is used to make these RPCs support fine-grained locking.
>  * getSnapshottableDirListing
>  * getSnapshotListing
>  * createSnapshot
>  * deleteSnapshot
>  * renameSnapshot
>  * allowSnapshot
>  * disallowSnapshot
>  * getSnapshotDiffReport
>  * getSnapshotDiffReportListing



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-17457) [FGL] UTs support fine-grained locking

2024-04-07 Thread ZanderXu (Jira)
ZanderXu created HDFS-17457:
---

 Summary: [FGL] UTs support fine-grained locking
 Key: HDFS-17457
 URL: https://issues.apache.org/jira/browse/HDFS-17457
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: ZanderXu
Assignee: ZanderXu


[FGL] UTs support fine-grained locking



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17456) Fix the dfsused statistics of datanode are incorrect when appending a file.

2024-04-07 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17834641#comment-17834641
 ] 

ASF GitHub Bot commented on HDFS-17456:
---

fuchaohong commented on PR #6713:
URL: https://github.com/apache/hadoop/pull/6713#issuecomment-2041384187

   @ZanderXu I have updated the description, thanks.




> Fix the dfsused statistics of datanode are incorrect when appending a file.
> ---
>
> Key: HDFS-17456
> URL: https://issues.apache.org/jira/browse/HDFS-17456
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.3.3
>Reporter: fuchaohong
>Priority: Major
>  Labels: pull-request-available
>
> In our production env, the namenode page showed that the datanode space had 
> been used up, but the actual datanode machine still had a lot of free space. 
> After troubleshooting, the dfsused statistics of datanode are incorrect when 
> appending a file. The following is the dfsused after each append of 100.
> |*Error*|*Expect*|
> |0|0|
> |100|100|
> |300|200|
> |600|300|



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17455) Fix Client throw IndexOutOfBoundsException in DFSInputStream#fetchBlockAt

2024-04-07 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17834643#comment-17834643
 ] 

ASF GitHub Bot commented on HDFS-17455:
---

hadoop-yetus commented on PR #6710:
URL: https://github.com/apache/hadoop/pull/6710#issuecomment-2041386927

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  11m 45s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 45s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  32m 51s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   5m 30s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   5m 26s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   1m 26s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 24s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 50s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   2m 27s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | -1 :x: |  spotbugs  |   2m 42s | 
[/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-client-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6710/2/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-client-warnings.html)
 |  hadoop-hdfs-project/hadoop-hdfs-client in trunk has 1 extant spotbugs 
warnings.  |
   | +1 :green_heart: |  shadedclient  |  35m 38s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 34s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m  2s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   5m 20s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   5m 20s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   5m 12s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   5m 12s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m 14s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   2m  7s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 31s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   2m 14s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   5m 55s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  35m 37s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 29s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | +1 :green_heart: |  unit  | 226m 32s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 48s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 413m 11s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6710/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6710 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 924ead2f0057 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 2c9a6fbbebdb0d1beecee5000b3ee926ea93a4b8 |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | 
https://

[jira] [Updated] (HDFS-17445) [FGL] All remaining operations support fine-grained locking

2024-04-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-17445:
--
Labels: pull-request-available  (was: )

> [FGL] All remaining operations support fine-grained locking
> ---
>
> Key: HDFS-17445
> URL: https://issues.apache.org/jira/browse/HDFS-17445
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Major
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17445) [FGL] All remaining operations support fine-grained locking

2024-04-07 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17834653#comment-17834653
 ] 

ASF GitHub Bot commented on HDFS-17445:
---

ZanderXu opened a new pull request, #6715:
URL: https://github.com/apache/hadoop/pull/6715

   [FGL] All remaining operations support fine-grained locking.




> [FGL] All remaining operations support fine-grained locking
> ---
>
> Key: HDFS-17445
> URL: https://issues.apache.org/jira/browse/HDFS-17445
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17454) Fix namenode fsck swallows the exception stacktrace, this can help us to troubleshooting log.

2024-04-07 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17834669#comment-17834669
 ] 

ASF GitHub Bot commented on HDFS-17454:
---

hadoop-yetus commented on PR #6709:
URL: https://github.com/apache/hadoop/pull/6709#issuecomment-2041450186

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 48s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  49m 33s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 24s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   1m 15s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   1m 12s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 22s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  9s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 42s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   3m 19s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  41m 22s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 11s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 15s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   1m 15s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  7s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   1m  7s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  1s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 17s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 56s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 34s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   3m 18s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  41m  6s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 268m 50s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6709/6/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 49s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 426m 44s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.namenode.TestFsck |
   |   | hadoop.hdfs.TestRollingUpgrade |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6709/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6709 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 257d69861710 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 1a04adda0ac48a99a5cfd06cba3a84a54ea13757 |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6709/6/testReport/ |
   | Max. process+thread count | 2445 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/

[jira] [Commented] (HDFS-17456) Fix the dfsused statistics of datanode are incorrect when appending a file.

2024-04-07 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17834677#comment-17834677
 ] 

ASF GitHub Bot commented on HDFS-17456:
---

hadoop-yetus commented on PR #6713:
URL: https://github.com/apache/hadoop/pull/6713#issuecomment-2041467867

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  12m 34s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  45m 16s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 20s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   1m 17s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   1m 11s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 26s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  9s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 40s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   3m 17s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  36m  1s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 13s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 14s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   1m 14s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 10s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   1m 10s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  1s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  0s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 14s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 54s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 38s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   3m 18s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  35m 26s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  | 232m 13s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 46s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 386m 56s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6713/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6713 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 30389c7458ec 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / bce57edfd65307680ee3ea76e85b7020c048ea2c |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6713/1/testReport/ |
   | Max. process+thread count | 3540 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6713/1/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> Fix the dfsused statisti

[jira] [Commented] (HDFS-17456) Fix the dfsused statistics of datanode are incorrect when appending a file.

2024-04-07 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17834684#comment-17834684
 ] 

ASF GitHub Bot commented on HDFS-17456:
---

hadoop-yetus commented on PR #6713:
URL: https://github.com/apache/hadoop/pull/6713#issuecomment-2041485037

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 30s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  44m 19s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 21s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   1m 17s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   1m 10s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 26s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  7s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 44s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   3m 15s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  35m 35s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 10s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 14s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   1m 14s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  8s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   1m  8s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 57s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 12s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 52s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 35s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   3m 14s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  35m 29s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  | 227m 27s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 46s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 368m 13s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6713/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6713 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 07fcd1da07a1 5.15.0-101-generic #111-Ubuntu SMP Tue Mar 5 
20:16:58 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / ecf2c6ae3012d4fe3ac718774e203062dc64b85f |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6713/2/testReport/ |
   | Max. process+thread count | 4502 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6713/2/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> Fix the dfsused statist

[jira] [Commented] (HDFS-17454) Fix namenode fsck swallows the exception stacktrace, this can help us to troubleshooting log.

2024-04-07 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17834688#comment-17834688
 ] 

ASF GitHub Bot commented on HDFS-17454:
---

hadoop-yetus commented on PR #6709:
URL: https://github.com/apache/hadoop/pull/6709#issuecomment-2041497666

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  18m 20s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  44m 48s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 23s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   1m 15s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   1m 11s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 24s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 10s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 45s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   3m 18s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  35m 51s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 10s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 13s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   1m 13s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  6s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   1m  6s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 58s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 13s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 53s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 34s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   3m 17s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  35m 55s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  | 277m 59s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 44s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 437m 30s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6709/7/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6709 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux cad6a7ee4c8f 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 9175c03e59c15cccbc5166645b9e7c203fab19cc |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6709/7/testReport/ |
   | Max. process+thread count | 3340 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6709/7/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus

[jira] [Commented] (HDFS-17411) [FGL] Client RPCs involving snapshot support fine-grained lock

2024-04-07 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17834697#comment-17834697
 ] 

ASF GitHub Bot commented on HDFS-17411:
---

hadoop-yetus commented on PR #6714:
URL: https://github.com/apache/hadoop/pull/6714#issuecomment-2041513520

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 44s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ HDFS-17384 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  49m 42s |  |  HDFS-17384 passed  |
   | +1 :green_heart: |  compile  |   1m 28s |  |  HDFS-17384 passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   1m 21s |  |  HDFS-17384 passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   1m 20s |  |  HDFS-17384 passed  |
   | +1 :green_heart: |  mvnsite  |   1m 32s |  |  HDFS-17384 passed  |
   | +1 :green_heart: |  javadoc  |   1m 19s |  |  HDFS-17384 passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 47s |  |  HDFS-17384 passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   3m 41s |  |  HDFS-17384 passed  |
   | +1 :green_heart: |  shadedclient  |  40m 48s |  |  branch has no errors 
when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 20s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 17s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   1m 17s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 10s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   1m 10s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   1m  0s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6714/1/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 2 new + 110 unchanged 
- 0 fixed = 112 total (was 110)  |
   | +1 :green_heart: |  mvnsite  |   1m 16s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 54s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 40s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   3m 42s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  41m 35s |  |  patch has no errors 
when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | -1 :x: |  unit  | 231m  3s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6714/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 46s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 391m  2s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.namenode.TestLeaseManager |
   |   | hadoop.hdfs.server.diskbalancer.command.TestDiskBalancerCommand |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6714/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6714 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 8800e9cfbfa2 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | HDFS-17384 / 607248b09ec5b002ed4fc1c0cb25f3bfe913c589 |
   | Defa

[jira] [Commented] (HDFS-17445) [FGL] All remaining operations support fine-grained locking

2024-04-07 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17834709#comment-17834709
 ] 

ASF GitHub Bot commented on HDFS-17445:
---

hadoop-yetus commented on PR #6715:
URL: https://github.com/apache/hadoop/pull/6715#issuecomment-2041536450

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  18m 28s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
   |||| _ HDFS-17384 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  48m 57s |  |  HDFS-17384 passed  |
   | +1 :green_heart: |  compile  |   1m 22s |  |  HDFS-17384 passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   1m 14s |  |  HDFS-17384 passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   1m 14s |  |  HDFS-17384 passed  |
   | +1 :green_heart: |  mvnsite  |   1m 23s |  |  HDFS-17384 passed  |
   | +1 :green_heart: |  javadoc  |   1m  7s |  |  HDFS-17384 passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 45s |  |  HDFS-17384 passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   3m 20s |  |  HDFS-17384 passed  |
   | +1 :green_heart: |  shadedclient  |  40m 43s |  |  branch has no errors 
when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 11s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 15s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   1m 15s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  5s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   1m  5s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   1m  3s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6715/1/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 5 new + 411 unchanged 
- 5 fixed = 416 total (was 416)  |
   | +1 :green_heart: |  mvnsite  |   1m 13s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 55s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 37s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   3m 21s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  40m 43s |  |  patch has no errors 
when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | -1 :x: |  unit  | 258m 48s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6715/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 48s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 432m 40s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.TestRollingUpgrade |
   |   | hadoop.hdfs.server.namenode.TestLeaseManager |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6715/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6715 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 1e4bcd529672 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | HDFS-17384 / 362c7193d0001a9dd957333cc5c0b463ea0f289f |
   | Default Java | Private Build-1.8.0_40