[jira] [Updated] (HDFS-17303) Make WINDOW_SIZE and NUM_WINDOWS configurable.

2024-05-06 Thread Zhaobo Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhaobo Huang updated HDFS-17303:

Description: 
1. The delay reported by the DN to the NN is an average over roughly 3 hours.
MutableRollingAverages: WINDOW_SIZE_MS_DEFAULT = 300_000, NUM_WINDOWS_DEFAULT = 36
(300,000 ms per window x 36 windows = 3 hours).
2. Can we make this window size and window count configurable parameters?

  was:
1. The delay reported by the DN to the NN is an average over roughly 3 hours, 
which confused me.
MutableRollingAverages: WINDOW_SIZE_MS_DEFAULT = 300_000, NUM_WINDOWS_DEFAULT = 36
2. There is a latency threshold for SlowNodes collected by the NN, currently 
set to 5ms by default (dfs.datanode.slowpeer.low.threshold.ms), while the 
threshold for logging SlowNodes on downstream writes is 300ms 
(dfs.datanode.low.io.warning.threshold.ms).
3. Can we make this window size and window count configurable parameters?


> Make WINDOW_SIZE and NUM_WINDOWS configurable.
> --
>
> Key: HDFS-17303
> URL: https://issues.apache.org/jira/browse/HDFS-17303
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Zhaobo Huang
>Assignee: Zhaobo Huang
>Priority: Major
>  Labels: pull-request-available
>
> 1. The delay reported by the DN to the NN is an average over roughly 3 hours.
> MutableRollingAverages: WINDOW_SIZE_MS_DEFAULT = 300_000, NUM_WINDOWS_DEFAULT = 36
> 2. Can we make this window size and window count configurable parameters?



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-17303) Make WINDOW_SIZE and NUM_WINDOWS configurable.

2024-05-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-17303:
--
Labels: pull-request-available  (was: )

> Make WINDOW_SIZE and NUM_WINDOWS configurable.
> --
>
> Key: HDFS-17303
> URL: https://issues.apache.org/jira/browse/HDFS-17303
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Zhaobo Huang
>Assignee: Zhaobo Huang
>Priority: Major
>  Labels: pull-request-available
>
> 1. The delay reported by the DN to the NN is an average over roughly 3 hours, 
> which confused me.
> MutableRollingAverages: WINDOW_SIZE_MS_DEFAULT = 300_000, NUM_WINDOWS_DEFAULT = 36
> 2. There is a latency threshold for SlowNodes collected by the NN, currently 
> set to 5ms by default (dfs.datanode.slowpeer.low.threshold.ms), while the 
> threshold for logging SlowNodes on downstream writes is 300ms 
> (dfs.datanode.low.io.warning.threshold.ms).
> 3. Can we make this window size and window count configurable parameters?



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17303) Make WINDOW_SIZE and NUM_WINDOWS configurable.

2024-05-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17844139#comment-17844139
 ] 

ASF GitHub Bot commented on HDFS-17303:
---

huangzhaobo99 opened a new pull request, #6801:
URL: https://github.com/apache/hadoop/pull/6801

   
   
   ### Description of PR
   
   1. The delay reported by the DN to the NN is an average over roughly 3 hours.
   MutableRollingAverages: WINDOW_SIZE_MS_DEFAULT = 300_000, NUM_WINDOWS_DEFAULT = 36
   2. Change this window size and window count into configurable parameters.
   
   ### How was this patch tested?
   Add UT
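
   A minimal sketch of the idea, assuming hypothetical configuration keys (the 
key names and defaults below are placeholders for illustration, not necessarily 
the properties added by this PR): read the window size and window count from 
the DataNode configuration instead of the hard-coded constants.

{code:java}
import org.apache.hadoop.conf.Configuration;

// Sketch only: the key names are assumptions, not the actual new properties.
public class RollingAverageWindowConfigSketch {
  static final String WINDOW_SIZE_KEY =
      "dfs.datanode.peer.metrics.rolling.avg.window.ms";
  static final long WINDOW_SIZE_DEFAULT = 300_000L;   // 5 minutes, as today
  static final String NUM_WINDOWS_KEY =
      "dfs.datanode.peer.metrics.rolling.avg.num.windows";
  static final int NUM_WINDOWS_DEFAULT = 36;          // 36 x 5 min = 3 hours

  public static void main(String[] args) {
    Configuration conf = new Configuration();
    long windowSizeMs = conf.getLong(WINDOW_SIZE_KEY, WINDOW_SIZE_DEFAULT);
    int numWindows = conf.getInt(NUM_WINDOWS_KEY, NUM_WINDOWS_DEFAULT);
    // These two values would then be passed to MutableRollingAverages in
    // place of WINDOW_SIZE_MS_DEFAULT and NUM_WINDOWS_DEFAULT.
    System.out.println("window=" + windowSizeMs + "ms, windows=" + numWindows);
  }
}
{code}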




> Make WINDOW_SIZE and NUM_WINDOWS configurable.
> --
>
> Key: HDFS-17303
> URL: https://issues.apache.org/jira/browse/HDFS-17303
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Zhaobo Huang
>Assignee: Zhaobo Huang
>Priority: Major
>
> 1. The delay reported by the DN to the NN is an average over roughly 3 hours, 
> which confused me.
> MutableRollingAverages: WINDOW_SIZE_MS_DEFAULT = 300_000, NUM_WINDOWS_DEFAULT = 36
> 2. There is a latency threshold for SlowNodes collected by the NN, currently 
> set to 5ms by default (dfs.datanode.slowpeer.low.threshold.ms), while the 
> threshold for logging SlowNodes on downstream writes is 300ms 
> (dfs.datanode.low.io.warning.threshold.ms).
> 3. Can we make this window size and window count configurable parameters?



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-17303) Make WINDOW_SIZE and NUM_WINDOWS configurable.

2024-05-06 Thread Zhaobo Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhaobo Huang reassigned HDFS-17303:
---

Assignee: Zhaobo Huang

> Make WINDOW_SIZE and NUM_WINDOWS configurable.
> --
>
> Key: HDFS-17303
> URL: https://issues.apache.org/jira/browse/HDFS-17303
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Zhaobo Huang
>Assignee: Zhaobo Huang
>Priority: Major
>
> 1. The delay reported by the DN to the NN is an average over roughly 3 hours, 
> which confused me.
> MutableRollingAverages: WINDOW_SIZE_MS_DEFAULT = 300_000, NUM_WINDOWS_DEFAULT = 36
> 2. There is a latency threshold for SlowNodes collected by the NN, currently 
> set to 5ms by default (dfs.datanode.slowpeer.low.threshold.ms), while the 
> threshold for logging SlowNodes on downstream writes is 300ms 
> (dfs.datanode.low.io.warning.threshold.ms).
> 3. Can we make this window size and window count configurable parameters?



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14646) Standby NameNode should not upload fsimage to an inappropriate NameNode.

2024-05-06 Thread WenjingLiu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17844105#comment-17844105
 ] 

WenjingLiu commented on HDFS-14646:
---

Hi, [~hemanthboyina]. In our test cluster we ran into the same issue and 
noticed that the "nnImage.purgeOldStorage" call was removed in 002.patch 
compared to 001.patch. As a result, the ANN does not remove old fsimage files, 
which may not be desirable in some situations. Is there a specific reason why 
the "nnImage.purgeOldStorage" call had to be removed?

> Standby NameNode should not upload fsimage to an inappropriate NameNode.
> 
>
> Key: HDFS-14646
> URL: https://issues.apache.org/jira/browse/HDFS-14646
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.1.2
>Reporter: Xudong Cao
>Assignee: Xudong Cao
>Priority: Major
>  Labels: multi-sbnn
> Attachments: HDFS-14646.000.patch, HDFS-14646.001.patch, 
> HDFS-14646.002.patch
>
>
> *Problem Description:*
>  In the multi-NameNode scenario, when an SNN uploads an FsImage, it puts the 
> image to all other NNs (whether the peer NN is an ANN or not). Even if the 
> peer NN immediately replies with an error (such as 
> TransferResult.NOT_ACTIVE_NAMENODE_FAILURE, 
> TransferResult.OLD_TRANSACTION_ID_FAILURE, etc.), the local SNN does not 
> terminate the put immediately; it uploads the FsImage completely to the peer 
> NN and only reads the peer NN's reply once the put is finished.
> Depending on the version of Jetty, this behavior can lead to different 
> consequences:
> *1. Under Hadoop 2.7.2 (with Jetty 6.1.26)*
>  After the peer NN calls HttpServletResponse.sendError(), the underlying TCP 
> connection remains established, and the data the SNN sends is read by the 
> Jetty framework itself on the peer NN side, so the SNN pointlessly keeps 
> sending the FsImage to the peer NN, wasting time and bandwidth. In a 
> relatively large HDFS cluster the FsImage can often reach about 30 GB, which 
> is a significant waste.
> *2. Under the newest release-3.2.0-RC1 (with Jetty 9.3.24) and trunk (with 
> Jetty 9.3.27)*
>  After the peer NN calls HttpServletResponse.sendError(), the underlying TCP 
> connection is closed automatically, and the SNN then directly gets an "Error 
> writing request body to server" exception, as below; note that reproducing 
> this needs a relatively large FsImage (e.g. on the order of 10 MB):
> {code:java}
> 2019-08-17 03:59:25,413 INFO namenode.TransferFsImage: Sending fileName: 
> /tmp/hadoop-root/dfs/name/current/fsimage_3364240, fileSize: 
> 9864721. Sent total: 524288 bytes. Size of last segment intended to send: 
> 4096 bytes.
>  java.io.IOException: Error writing request body to server
>  at 
> sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.checkError(HttpURLConnection.java:3587)
>  at 
> sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.write(HttpURLConnection.java:3570)
>  at 
> org.apache.hadoop.hdfs.server.namenode.TransferFsImage.copyFileToStream(TransferFsImage.java:396)
>  at 
> org.apache.hadoop.hdfs.server.namenode.TransferFsImage.writeFileToPutRequest(TransferFsImage.java:340)
>  at 
> org.apache.hadoop.hdfs.server.namenode.TransferFsImage.uploadImage(TransferFsImage.java:314)
>  at 
> org.apache.hadoop.hdfs.server.namenode.TransferFsImage.uploadImageFromStorage(TransferFsImage.java:249)
>  at 
> org.apache.hadoop.hdfs.server.namenode.ha.StandbyCheckpointer$1.call(StandbyCheckpointer.java:277)
>  at 
> org.apache.hadoop.hdfs.server.namenode.ha.StandbyCheckpointer$1.call(StandbyCheckpointer.java:272)
>  at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748)
>  2019-08-17 03:59:25,422 INFO namenode.TransferFsImage: Sending fileName: 
> /tmp/hadoop-root/dfs/name/current/fsimage_3364240, fileSize: 
> 9864721. Sent total: 851968 bytes. Size of last segment intended to send: 
> 4096 bytes.
>  java.io.IOException: Error writing request body to server
>  at 
> sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.checkError(HttpURLConnection.java:3587)
>  at 
> sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.write(HttpURLConnection.java:3570)
>  at 
> org.apache.hadoop.hdfs.server.namenode.TransferFsImage.copyFileToStream(TransferFsImage.java:396)
>  at 
> org.apache.hadoop.hdfs.server.namenode.TransferFsImage.writeFileToPutRequest(TransferFsImage.java:340)
>   {code}
>                   
> *Solution:*

[jira] [Commented] (HDFS-17384) [FGL] Replace the global lock with global FS Lock and global BM lock

2024-05-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17844096#comment-17844096
 ] 

ASF GitHub Bot commented on HDFS-17384:
---

ZanderXu commented on code in PR #6762:
URL: https://github.com/apache/hadoop/pull/6762#discussion_r1591744778


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java:
##
@@ -6043,7 +6079,7 @@ void updatePipeline(
   updatePipelineInternal(clientName, oldBlock, newBlock, newNodes,
   newStorageIDs, logRetryCache);
 } finally {
-  writeUnlock("updatePipeline");
+  writeUnlock(FSNamesystemLockMode.GLOBAL, "updatePipeline");

Review Comment:
   Yes, this RPC involves the iNode and the FSEditLog, so we use the global lock here.
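
   As a rough illustration of the lock order described in this ticket (a 
simplified sketch, not the actual FGL code in FSNamesystem), an operation that 
spans both the directory tree and block management takes the FS lock first, 
then the BM lock, and releases them in reverse order:

{code:java}
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Simplified sketch of the FS-lock-before-BM-lock ordering; not the real
// FSNamesystemLock implementation.
public class FineGrainedLockSketch {
  private final ReentrantReadWriteLock fsLock = new ReentrantReadWriteLock(true);
  private final ReentrantReadWriteLock bmLock = new ReentrantReadWriteLock(true);

  /** "Global" write lock: FS lock first, then BM lock. */
  public void globalWriteLock() {
    fsLock.writeLock().lock();
    bmLock.writeLock().lock();
  }

  /** Release in reverse acquisition order. */
  public void globalWriteUnlock() {
    bmLock.writeLock().unlock();
    fsLock.writeLock().unlock();
  }
}
{code}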





> [FGL] Replace the global lock with global FS Lock and global BM lock
> 
>
> Key: HDFS-17384
> URL: https://issues.apache.org/jira/browse/HDFS-17384
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Major
>  Labels: FGL, pull-request-available
>
> First, we can replace the current global lock with two locks, global FS lock 
> and global BM lock.
> The global FS lock is used to make directory tree-related operations 
> thread-safe.
> The global BM lock is used to make block-related operations and DN-related 
> operations thread-safe.
>  
> For operations involving both the directory tree and blocks or DNs, both the 
> global FS lock and the global BM lock are acquired.
>  
> The lock order should be:
>  * The global FS lock
>  * The global BM lock
>  
> There are some special requirements for this ticket.
>  * End users can choose between the global lock and the fine-grained locks 
> through configuration.
>  * Try not to modify the current implementation logic any more than necessary.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17464) Improve some logs output in class FsDatasetImpl

2024-05-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17844084#comment-17844084
 ] 

ASF GitHub Bot commented on HDFS-17464:
---

hfutatzhanghb commented on code in PR #6724:
URL: https://github.com/apache/hadoop/pull/6724#discussion_r1591724020


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java:
##
@@ -2028,7 +2028,7 @@ private ReplicaInfo finalizeReplica(String bpid, 
ReplicaInfo replicaInfo)
   } else {
 FsVolumeImpl v = (FsVolumeImpl)replicaInfo.getVolume();
 if (v == null) {
-  throw new IOException("No volume for block " + replicaInfo);
+  throw new IOException("No volume for bpid: " + bpid + " , block: " + 
replicaInfo);

Review Comment:
   @haiyang1987 Sir, thanks for your careful review. Have fixed.



##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java:
##
@@ -2016,7 +2016,7 @@ private ReplicaInfo finalizeReplica(String bpid, 
ReplicaInfo replicaInfo)
   if (volumeMap.get(bpid, replicaInfo.getBlockId()).getGenerationStamp()
   > replicaInfo.getGenerationStamp()) {
 throw new IOException("Generation Stamp should be monotonically "
-+ "increased.");
++ "increased. bpid: " + bpid + " , block: " + replicaInfo);

Review Comment:
   done





> Improve some logs output in class FsDatasetImpl
> ---
>
> Key: HDFS-17464
> URL: https://issues.apache.org/jira/browse/HDFS-17464
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.4.0
>Reporter: farmmamba
>Assignee: farmmamba
>Priority: Minor
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17464) Improve some logs output in class FsDatasetImpl

2024-05-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17844080#comment-17844080
 ] 

ASF GitHub Bot commented on HDFS-17464:
---

haiyang1987 commented on PR #6724:
URL: https://github.com/apache/hadoop/pull/6724#issuecomment-2097225735

   LGTM. Left some small comments. Thanks.




> Improve some logs output in class FsDatasetImpl
> ---
>
> Key: HDFS-17464
> URL: https://issues.apache.org/jira/browse/HDFS-17464
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.4.0
>Reporter: farmmamba
>Assignee: farmmamba
>Priority: Minor
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17488) DN can fail IBRs with NPE when a volume is removed

2024-05-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17844081#comment-17844081
 ] 

ASF GitHub Bot commented on HDFS-17488:
---

haiyang1987 commented on PR #6759:
URL: https://github.com/apache/hadoop/pull/6759#issuecomment-2097227087

   LGTM.
   
   Hi @Hexiaoqiao @hfutatzhanghb any other comments?




> DN can fail IBRs with NPE when a volume is removed
> --
>
> Key: HDFS-17488
> URL: https://issues.apache.org/jira/browse/HDFS-17488
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Felix N
>Assignee: Felix N
>Priority: Major
>  Labels: pull-request-available
>
>  
> Error logs
> {code:java}
> 2024-04-22 15:46:33,422 [BP-1842952724-10.22.68.249-1713771988830 
> heartbeating to localhost/127.0.0.1:64977] ERROR datanode.DataNode 
> (BPServiceActor.java:run(922)) - Exception in BPOfferService for Block pool 
> BP-1842952724-10.22.68.249-1713771988830 (Datanode Uuid 
> 1659ffaf-1a80-4a8e-a542-643f6bd97ed4) service to localhost/127.0.0.1:64977
> java.lang.NullPointerException
>     at 
> org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolClientSideTranslatorPB.blockReceivedAndDeleted(DatanodeProtocolClientSideTranslatorPB.java:246)
>     at 
> org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.sendIBRs(IncrementalBlockReportManager.java:218)
>     at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:749)
>     at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:920)
>     at java.lang.Thread.run(Thread.java:748) {code}
> The root cause is in BPOfferService#notifyNamenodeBlock; the problem occurs 
> when it is called on a block belonging to a volume that was already removed. 
> Because the volume was already removed:
>  
> {code:java}
> private void notifyNamenodeBlock(ExtendedBlock block, BlockStatus status,
> String delHint, String storageUuid, boolean isOnTransientStorage) {
>   checkBlock(block);
>   final ReceivedDeletedBlockInfo info = new ReceivedDeletedBlockInfo(
>   block.getLocalBlock(), status, delHint);
>   final DatanodeStorage storage = dn.getFSDataset().getStorage(storageUuid);
>   
>   // storage == null here because it's already removed earlier.
>   for (BPServiceActor actor : bpServices) {
> actor.getIbrManager().notifyNamenodeBlock(info, storage,
> isOnTransientStorage);
>   }
> } {code}
> so IBRs with a null storage are now pending.
> The reason notifyNamenodeBlock can be triggered for such blocks lies in 
> DirectoryScanner#reconcile:
> {code:java}
>   public void reconcile() throws IOException {
>     LOG.debug("reconcile start DirectoryScanning");
>     scan();
> // If a volume is removed here after scan() already finished running,
> // diffs is stale and checkAndUpdate will run on a removed volume
>     // HDFS-14476: run checkAndUpdate with batch to avoid holding the lock too
>     // long
>     int loopCount = 0;
>     synchronized (diffs) {
>       for (final Map.Entry entry : diffs.getEntries()) {
>         dataset.checkAndUpdate(entry.getKey(), entry.getValue());        
>     ...
>   } {code}
> Inside checkAndUpdate, memBlockInfo is null because all of the block metadata 
> in memory was removed during the volume removal, but diskFile still exists. 
> Then DataNode#notifyNamenodeDeletedBlock (and further down the line, 
> notifyNamenodeBlock) is called on this block.
>  
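
A minimal sketch of the guard the description implies, reusing the names from 
the snippet quoted above (an assumption about the shape of the fix, not 
necessarily what the linked PR does): bail out of notifyNamenodeBlock when the 
storage has already been removed, instead of queueing an IBR against a null 
DatanodeStorage.

{code:java}
// Sketch only, based on the snippet quoted above; not necessarily the actual fix.
final DatanodeStorage storage = dn.getFSDataset().getStorage(storageUuid);
if (storage == null) {
  // The volume backing this block was removed; skip the IBR instead of
  // queueing a ReceivedDeletedBlockInfo with a null storage.
  LOG.warn("Ignoring block {} on removed storage {}",
      block.getLocalBlock(), storageUuid);
  return;
}
for (BPServiceActor actor : bpServices) {
  actor.getIbrManager().notifyNamenodeBlock(info, storage, isOnTransientStorage);
}
{code}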



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17464) Improve some logs output in class FsDatasetImpl

2024-05-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17844079#comment-17844079
 ] 

ASF GitHub Bot commented on HDFS-17464:
---

haiyang1987 commented on code in PR #6724:
URL: https://github.com/apache/hadoop/pull/6724#discussion_r1591717049


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java:
##
@@ -2028,7 +2028,7 @@ private ReplicaInfo finalizeReplica(String bpid, 
ReplicaInfo replicaInfo)
   } else {
 FsVolumeImpl v = (FsVolumeImpl)replicaInfo.getVolume();
 if (v == null) {
-  throw new IOException("No volume for block " + replicaInfo);
+  throw new IOException("No volume for bpid: " + bpid + " , block: " + 
replicaInfo);

Review Comment:
 throw new IOException("No volume for bpid: " + bpid + ", block: " 
+ replicaInfo);



##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java:
##
@@ -2016,7 +2016,7 @@ private ReplicaInfo finalizeReplica(String bpid, 
ReplicaInfo replicaInfo)
   if (volumeMap.get(bpid, replicaInfo.getBlockId()).getGenerationStamp()
   > replicaInfo.getGenerationStamp()) {
 throw new IOException("Generation Stamp should be monotonically "
-+ "increased.");
++ "increased. bpid: " + bpid + " , block: " + replicaInfo);

Review Comment:
"increased bpid: " + bpid + ", block: " + replicaInfo);





> Improve some logs output in class FsDatasetImpl
> ---
>
> Key: HDFS-17464
> URL: https://issues.apache.org/jira/browse/HDFS-17464
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.4.0
>Reporter: farmmamba
>Assignee: farmmamba
>Priority: Minor
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17510) Change of Codec configuration does not work

2024-05-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17843819#comment-17843819
 ] 

ASF GitHub Bot commented on HDFS-17510:
---

hadoop-yetus commented on PR #6798:
URL: https://github.com/apache/hadoop/pull/6798#issuecomment-2096621874

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 45s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  50m  9s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  19m 38s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |  19m 17s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   1m 16s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 53s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 12s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 45s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   2m 41s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  41m 24s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 55s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  19m 12s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |  19m 12s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  19m 34s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |  19m 34s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m 10s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 41s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m  3s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 49s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   2m 52s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  41m 39s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  19m 56s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 53s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 252m 10s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6798/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6798 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux b6229c4986c0 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 88a3f9441aa4aa3cbea41b01be90b4a95881cef2 |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6798/2/testReport/ |
   | Max. process+thread count | 1270 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6798/2/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> Change of Code

[jira] [Commented] (HDFS-17512) dumpXattrs logic optimization

2024-05-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17843757#comment-17843757
 ] 

ASF GitHub Bot commented on HDFS-17512:
---

hadoop-yetus commented on PR #6800:
URL: https://github.com/apache/hadoop/pull/6800#issuecomment-2096156223

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 20s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m 22s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 45s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 41s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   0m 39s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 47s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 41s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m  9s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m 42s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m 13s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | -1 :x: |  mvninstall  |   0m 28s | 
[/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6800/1/artifact/out/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch failed.  |
   | -1 :x: |  compile  |   0m 28s | 
[/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6800/1/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt)
 |  hadoop-hdfs in the patch failed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.  |
   | -1 :x: |  javac  |   0m 28s | 
[/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6800/1/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt)
 |  hadoop-hdfs in the patch failed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.  |
   | -1 :x: |  compile  |   0m 26s | 
[/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6800/1/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt)
 |  hadoop-hdfs in the patch failed with JDK Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.  |
   | -1 :x: |  javac  |   0m 26s | 
[/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6800/1/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt)
 |  hadoop-hdfs in the patch failed with JDK Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 30s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6800/1/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 1 new + 6 unchanged - 
0 fixed = 7 total (was 6)  |
   | -1 :x: |  mvnsite  |   0m 27s | 
[/patch-mvnsite-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6800/1/artifact/out/patch-mvnsite-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  javadoc  |   0m 32s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  | 

[jira] [Commented] (HDFS-17510) Change of Codec configuration does not work

2024-05-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17843749#comment-17843749
 ] 

ASF GitHub Bot commented on HDFS-17510:
---

hadoop-yetus commented on PR #6798:
URL: https://github.com/apache/hadoop/pull/6798#issuecomment-2096013908

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 45s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | -1 :x: |  mvninstall  |  51m 17s | 
[/branch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6798/1/artifact/out/branch-mvninstall-root.txt)
 |  root in trunk failed.  |
   | +1 :green_heart: |  compile  |  19m 57s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |  19m 31s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   1m 25s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 46s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 18s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 50s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   2m 40s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  42m 16s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 54s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m  8s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |  18m  8s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m 38s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |  17m 38s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m 14s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 39s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m  8s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 50s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   2m 40s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  41m  9s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  |  20m 16s | 
[/patch-unit-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6798/1/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 59s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 252m  8s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.io.compress.TestCodecPool |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6798/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6798 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux dca76a9ce647 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / c3cf5364f23b6f0edc21fd995c7ae7782763325e |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6798/1/testReport/ |
   | Max. process+thread count | 3

[jira] [Updated] (HDFS-17512) dumpXattrs logic optimization

2024-05-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-17512:
--
Labels: pull-request-available  (was: )

> dumpXattrs logic optimization
> -
>
> Key: HDFS-17512
> URL: https://issues.apache.org/jira/browse/HDFS-17512
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.2.0, 3.3.3
>Reporter: Yukun Zhang
>Priority: Minor
>  Labels: pull-request-available
>
> The dumpXattrs logic in VIO should use 
> FSImageFormatPBINode.Loader.loadXAttrs() to get the Xattrs attribute for easy 
> maintenance.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17512) dumpXattrs logic optimization

2024-05-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17843743#comment-17843743
 ] 

ASF GitHub Bot commented on HDFS-17512:
---

YaAYadeer opened a new pull request, #6800:
URL: https://github.com/apache/hadoop/pull/6800

   
   
   ### Description of PR
   https://issues.apache.org/jira/browse/HDFS-17512
   
   




> dumpXattrs logic optimization
> -
>
> Key: HDFS-17512
> URL: https://issues.apache.org/jira/browse/HDFS-17512
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.2.0, 3.3.3
>Reporter: Yukun Zhang
>Priority: Minor
>
> The dumpXattrs logic in VIO should use 
> FSImageFormatPBINode.Loader.loadXAttrs() to get the Xattrs attribute for easy 
> maintenance.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-17512) dumpXattrs logic optimization

2024-05-06 Thread Yukun Zhang (Jira)
Yukun Zhang created HDFS-17512:
--

 Summary: dumpXattrs logic optimization
 Key: HDFS-17512
 URL: https://issues.apache.org/jira/browse/HDFS-17512
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 3.3.3, 3.2.0
Reporter: Yukun Zhang


The dumpXattrs logic in VIO should use FSImageFormatPBINode.Loader.loadXAttrs() 
to get the Xattrs attribute for easy maintenance.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-17511) method storagespaceConsumedContiguous should use BlockInfo#getReplication to compute dsDelta

2024-05-06 Thread farmmamba (Jira)
farmmamba created HDFS-17511:


 Summary: method storagespaceConsumedContiguous should use 
BlockInfo#getReplication to compute dsDelta
 Key: HDFS-17511
 URL: https://issues.apache.org/jira/browse/HDFS-17511
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Reporter: farmmamba
Assignee: farmmamba


As the title says, we should use BlockInfo#getReplication to compute the 
storage space in the method INodeFile#storagespaceConsumedContiguous.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-17510) Change of Codec configuration does not work

2024-05-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17510?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-17510:
--
Labels: pull-request-available  (was: )

> Change of Codec configuration does not work
> ---
>
> Key: HDFS-17510
> URL: https://issues.apache.org/jira/browse/HDFS-17510
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: compress
>Reporter: Zhikai Hu
>Priority: Minor
>  Labels: pull-request-available
>
> In one of my projects, I need to dynamically adjust compression level for 
> different files. 
> However, I found that in most cases the new compression level does not take 
> effect as expected, the old compression level continues to be used.
> Here is the relevant code snippet:
> ZStandardCodec zStandardCodec = new ZStandardCodec();
> zStandardCodec.setConf(conf);
> conf.set("io.compression.codec.zstd.level", "5"); // level may change 
> dynamically
> conf.set("io.compression.codec.zstd", zStandardCodec.getClass().getName());
> writer = SequenceFile.createWriter(conf, 
> SequenceFile.Writer.file(sequenceFilePath),
>                                 
> SequenceFile.Writer.keyClass(LongWritable.class),
>                                 
> SequenceFile.Writer.valueClass(BytesWritable.class),
>                                 
> SequenceFile.Writer.compression(CompressionType.BLOCK));
> The reason is SequenceFile.Writer.init() method will call 
> CodecPool.getCompressor(codec, null) to get a compressor. 
> If the compressor is a reused instance, the conf is not applied because it is 
> passed as null:
> public static Compressor getCompressor(CompressionCodec codec, Configuration 
> conf) {
> Compressor compressor = borrow(compressorPool, codec.getCompressorType());
> if (compressor == null)
> { compressor = codec.createCompressor(); LOG.info("Got brand-new compressor 
> ["+codec.getDefaultExtension()+"]"); }
> else {
> compressor.reinit(conf);   //conf is null here
> ..
>  
> Please also refer to my unit test to reproduce the bug. 
> To address this bug, I modified the code to ensure that the configuration is 
> read back from the codec when a compressor is reused.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17510) Change of Codec configuration does not work

2024-05-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17843687#comment-17843687
 ] 

ASF GitHub Bot commented on HDFS-17510:
---

skyskyhu opened a new pull request, #6798:
URL: https://github.com/apache/hadoop/pull/6798

   HDFS-17510 Change of Codec configuration does not work
   
   ### Description of PR
   In one of my projects, I need to dynamically adjust compression level for 
different files. 
   However, I found that in most cases the new compression level does not take 
effect as expected, the old compression level continues to be used.
   
   Here is the relevant code snippet:
   ZStandardCodec zStandardCodec = new ZStandardCodec();
   zStandardCodec.setConf(conf);
   conf.set("io.compression.codec.zstd.level", "5"); // level may change 
dynamically
   conf.set("io.compression.codec.zstd", zStandardCodec.getClass().getName());
   writer = SequenceFile.createWriter(conf, 
SequenceFile.Writer.file(sequenceFilePath),
   
SequenceFile.Writer.keyClass(LongWritable.class),
   
SequenceFile.Writer.valueClass(BytesWritable.class),
   
SequenceFile.Writer.compression(CompressionType.BLOCK));
   
   The reason is that the SequenceFile.Writer.init() method calls 
CodecPool.getCompressor(codec, null) to get a compressor. 
   If the compressor is a reused instance, the conf is not applied because it 
is passed as null:
   
   public static Compressor getCompressor(CompressionCodec codec, Configuration conf) {
     Compressor compressor = borrow(compressorPool, codec.getCompressorType());
     if (compressor == null) {
       compressor = codec.createCompressor();
       LOG.info("Got brand-new compressor [" + codec.getDefaultExtension() + "]");
     } else {
       compressor.reinit(conf);   // conf is null here
       ..
   
   Please also refer to my unit test to reproduce the bug. 
   To address this bug, I modified the code to ensure that the configuration is 
read back from the codec when a compressor is reused.
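
   A hedged sketch of that idea (an assumption about the shape of the change, 
not necessarily the exact patch): when the caller passes a null conf and the 
codec itself is Configurable, reinit the recycled compressor from the codec's 
own configuration so that a changed io.compression.codec.zstd.level takes 
effect again.

{code:java}
import org.apache.hadoop.conf.Configurable;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.Compressor;

final class CompressorReinitSketch {
  // Sketch only: fall back to the codec's conf when none is supplied, so a
  // recycled compressor picks up the current compression settings.
  static void reinitReused(Compressor compressor, CompressionCodec codec,
      Configuration conf) {
    Configuration effective = conf;
    if (effective == null && codec instanceof Configurable) {
      effective = ((Configurable) codec).getConf();
    }
    compressor.reinit(effective); // previously reinit(null) kept the old level
  }
}
{code}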
   
   ### How was this patch tested?
   unit test 
   
   ### For code changes:
   
   - [Y] Does the title or this PR starts with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   




> Change of Codec configuration does not work
> ---
>
> Key: HDFS-17510
> URL: https://issues.apache.org/jira/browse/HDFS-17510
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: compress
>Reporter: Zhikai Hu
>Priority: Minor
>
> In one of my projects, I need to dynamically adjust compression level for 
> different files. 
> However, I found that in most cases the new compression level does not take 
> effect as expected, the old compression level continues to be used.
> Here is the relevant code snippet:
> ZStandardCodec zStandardCodec = new ZStandardCodec();
> zStandardCodec.setConf(conf);
> conf.set("io.compression.codec.zstd.level", "5"); // level may change 
> dynamically
> conf.set("io.compression.codec.zstd", zStandardCodec.getClass().getName());
> writer = SequenceFile.createWriter(conf, 
> SequenceFile.Writer.file(sequenceFilePath),
>                                 
> SequenceFile.Writer.keyClass(LongWritable.class),
>                                 
> SequenceFile.Writer.valueClass(BytesWritable.class),
>                                 
> SequenceFile.Writer.compression(CompressionType.BLOCK));
> The reason is SequenceFile.Writer.init() method will call 
> CodecPool.getCompressor(codec, null) to get a compressor. 
> If the compressor is a reused instance, the conf is not applied because it is 
> passed as null:
> public static Compressor getCompressor(CompressionCodec codec, Configuration 
> conf) {
> Compressor compressor = borrow(compressorPool, codec.getCompressorType());
> if (compressor == null)
> { compressor = codec.createCompressor(); LOG.info("Got brand-new compressor 
> ["+codec.getDefaultExtension()+"]"); }
> else {
> compressor.reinit(conf);   //conf is null here
> ..
>  
> Please also refer to my unit test to reproduce the bug. 
> To address this bug, I modified the code to ensure that the configuration is 
> read back from the codec when a compressor is reused.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubs

[jira] [Updated] (HDFS-17510) Change of Codec configuration does not work

2024-05-06 Thread Zhikai Hu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17510?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhikai Hu updated HDFS-17510:
-
Description: 
In one of my projects, I need to dynamically adjust compression level for 
different files. 
However, I found that in most cases the new compression level does not take 
effect as expected, the old compression level continues to be used.

Here is the relevant code snippet:
ZStandardCodec zStandardCodec = new ZStandardCodec();
zStandardCodec.setConf(conf);
conf.set("io.compression.codec.zstd.level", "5"); // level may change 
dynamically
conf.set("io.compression.codec.zstd", zStandardCodec.getClass().getName());
writer = SequenceFile.createWriter(conf, 
SequenceFile.Writer.file(sequenceFilePath),
                                
SequenceFile.Writer.keyClass(LongWritable.class),
                                
SequenceFile.Writer.valueClass(BytesWritable.class),
                                
SequenceFile.Writer.compression(CompressionType.BLOCK));
The reason is SequenceFile.Writer.init() method will call 
CodecPool.getCompressor(codec, null) to get a compressor. 
If the compressor is a reused instance, the conf is not applied because it is 
passed as null:
public static Compressor getCompressor(CompressionCodec codec, Configuration 
conf) {
Compressor compressor = borrow(compressorPool, codec.getCompressorType());
if (compressor == null) {
compressor = codec.createCompressor();
LOG.info("Got brand-new compressor ["+codec.getDefaultExtension()+"]");
} else {
compressor.reinit(conf);   //conf is null here
..

Please also refer to my unit test to reproduce the bug. 
To address this bug, I modified the code to ensure that the configuration is 
read back from the codec when a compressor is reused.

  was:
In one of my projects, I need to dynamically adjust compression level for 
different files. 
However, I found that in most cases the new compression level does not take 
effect as expected, the old compression level continues to be used.

Here is the relevant code snippet:
ZStandardCodec zStandardCodec = new ZStandardCodec();
zStandardCodec.setConf(conf);
conf.set("io.compression.codec.zstd.level", "5"); // level may change 
dynamically
conf.set("io.compression.codec.zstd", zStandardCodec.getClass().getName());
writer = SequenceFile.createWriter(conf, 
SequenceFile.Writer.file(sequenceFilePath),
                                
SequenceFile.Writer.keyClass(LongWritable.class),
                                
SequenceFile.Writer.valueClass(BytesWritable.class),
                                
SequenceFile.Writer.compression(CompressionType.BLOCK));
The reason is SequenceFile.Writer.init() method will call 
CodecPool.getCompressor(codec, null) to get a compressor. 
If the compressor is a reused instance, the conf is not applied because it is 
passed as null:
public static Compressor getCompressor(CompressionCodec codec, Configuration 
conf) {
  Compressor compressor = borrow(compressorPool, codec.getCompressorType());
  if (compressor == null)

{     compressor = codec.createCompressor();     LOG.info("Got brand-new 
compressor ["+codec.getDefaultExtension()+"]");   }

else {
    compressor.reinit(conf); // conf is null here
    if(LOG.isDebugEnabled())

{         LOG.debug("Got recycled compressor");     }

  }

Please also refer to my unit test to reproduce the bug. 
To address this bug, I modified the code to ensure that the configuration is 
read back from the codec when a compressor is reused.


> Change of Codec configuration does not work
> ---
>
> Key: HDFS-17510
> URL: https://issues.apache.org/jira/browse/HDFS-17510
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: compress
>Reporter: Zhikai Hu
>Priority: Minor
>
> In one of my projects, I need to dynamically adjust compression level for 
> different files. 
> However, I found that in most cases the new compression level does not take 
> effect as expected, the old compression level continues to be used.
> Here is the relevant code snippet:
> ZStandardCodec zStandardCodec = new ZStandardCodec();
> zStandardCodec.setConf(conf);
> conf.set("io.compression.codec.zstd.level", "5"); // level may change 
> dynamically
> conf.set("io.compression.codec.zstd", zStandardCodec.getClass().getName());
> writer = SequenceFile.createWriter(conf, 
> SequenceFile.Writer.file(sequenceFilePath),
>                                 
> SequenceFile.Writer.keyClass(LongWritable.class),
>                                 
> SequenceFile.Writer.valueClass(BytesWritable.class),
>                                 
> SequenceFile.Writer.compression(CompressionType.BLOCK));
> The reason is SequenceFile.Writer.init() method will call 
> CodecPool.getCompressor(codec, null) to get a compressor. 
> If the compressor is a reused instance, the conf 

[jira] [Updated] (HDFS-17510) Change of Codec configuration does not work

2024-05-06 Thread Zhikai Hu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17510?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhikai Hu updated HDFS-17510:
-
Description: 
In one of my projects, I need to dynamically adjust compression level for 
different files. 
However, I found that in most cases the new compression level does not take 
effect as expected, the old compression level continues to be used.

Here is the relevant code snippet:
ZStandardCodec zStandardCodec = new ZStandardCodec();
zStandardCodec.setConf(conf);
conf.set("io.compression.codec.zstd.level", "5"); // level may change 
dynamically
conf.set("io.compression.codec.zstd", zStandardCodec.getClass().getName());
writer = SequenceFile.createWriter(conf, 
SequenceFile.Writer.file(sequenceFilePath),
                                
SequenceFile.Writer.keyClass(LongWritable.class),
                                
SequenceFile.Writer.valueClass(BytesWritable.class),
                                
SequenceFile.Writer.compression(CompressionType.BLOCK));

The reason is SequenceFile.Writer.init() method will call 
CodecPool.getCompressor(codec, null) to get a compressor. 
If the compressor is a reused instance, the conf is not applied because it is 
passed as null:
public static Compressor getCompressor(CompressionCodec codec, Configuration 
conf) {
Compressor compressor = borrow(compressorPool, codec.getCompressorType());
if (compressor == null)

{ compressor = codec.createCompressor(); LOG.info("Got brand-new compressor 
["+codec.getDefaultExtension()+"]"); }

else {
compressor.reinit(conf);   //conf is null here
..

 

Please also refer to my unit test to reproduce the bug. 
To address this bug, I modified the code to ensure that the configuration is 
read back from the codec when a compressor is reused.

  was:
In one of my projects, I need to dynamically adjust compression level for 
different files. 
However, I found that in most cases the new compression level does not take 
effect as expected, the old compression level continues to be used.

Here is the relevant code snippet:
ZStandardCodec zStandardCodec = new ZStandardCodec();
zStandardCodec.setConf(conf);
conf.set("io.compression.codec.zstd.level", "5"); // level may change 
dynamically
conf.set("io.compression.codec.zstd", zStandardCodec.getClass().getName());
writer = SequenceFile.createWriter(conf, 
SequenceFile.Writer.file(sequenceFilePath),
                                
SequenceFile.Writer.keyClass(LongWritable.class),
                                
SequenceFile.Writer.valueClass(BytesWritable.class),
                                
SequenceFile.Writer.compression(CompressionType.BLOCK));
The reason is SequenceFile.Writer.init() method will call 
CodecPool.getCompressor(codec, null) to get a compressor. 
If the compressor is a reused instance, the conf is not applied because it is 
passed as null:
public static Compressor getCompressor(CompressionCodec codec, Configuration 
conf) {
Compressor compressor = borrow(compressorPool, codec.getCompressorType());
if (compressor == null) {
compressor = codec.createCompressor();
LOG.info("Got brand-new compressor ["+codec.getDefaultExtension()+"]");
} else {
compressor.reinit(conf);   //conf is null here
..

Please also refer to my unit test to reproduce the bug. 
To address this bug, I modified the code to ensure that the configuration is 
read back from the codec when a compressor is reused.


> Change of Codec configuration does not work
> ---
>
> Key: HDFS-17510
> URL: https://issues.apache.org/jira/browse/HDFS-17510
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: compress
>Reporter: Zhikai Hu
>Priority: Minor
>
> In one of my projects, I need to dynamically adjust compression level for 
> different files. 
> However, I found that in most cases the new compression level does not take 
> effect as expected, the old compression level continues to be used.
> Here is the relevant code snippet:
> ZStandardCodec zStandardCodec = new ZStandardCodec();
> zStandardCodec.setConf(conf);
> conf.set("io.compression.codec.zstd.level", "5"); // level may change 
> dynamically
> conf.set("io.compression.codec.zstd", zStandardCodec.getClass().getName());
> writer = SequenceFile.createWriter(conf, 
> SequenceFile.Writer.file(sequenceFilePath),
>                                 
> SequenceFile.Writer.keyClass(LongWritable.class),
>                                 
> SequenceFile.Writer.valueClass(BytesWritable.class),
>                                 
> SequenceFile.Writer.compression(CompressionType.BLOCK));
> The reason is that the SequenceFile.Writer.init() method calls
> CodecPool.getCompressor(codec, null) to get a compressor.
> If the compressor is a reused instance, the conf is not applied because it is 
> passed as null:
> public static Compressor getCompressor(Compre

[jira] [Updated] (HDFS-17510) Change of Codec configuration does not work

2024-05-06 Thread Zhikai Hu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17510?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhikai Hu updated HDFS-17510:
-
Description: 
In one of my projects, I need to dynamically adjust the compression level for
different files.
However, I found that in most cases the new compression level does not take
effect as expected; the old compression level continues to be used.

Here is the relevant code snippet:
ZStandardCodec zStandardCodec = new ZStandardCodec();
zStandardCodec.setConf(conf);
conf.set("io.compression.codec.zstd.level", "5"); // level may change 
dynamically
conf.set("io.compression.codec.zstd", zStandardCodec.getClass().getName());
writer = SequenceFile.createWriter(conf, 
SequenceFile.Writer.file(sequenceFilePath),
                                
SequenceFile.Writer.keyClass(LongWritable.class),
                                
SequenceFile.Writer.valueClass(BytesWritable.class),
                                
SequenceFile.Writer.compression(CompressionType.BLOCK));
The reason is that the SequenceFile.Writer.init() method calls
CodecPool.getCompressor(codec, null) to get a compressor.
If the compressor is a reused instance, the conf is not applied because it is
passed as null:

public static Compressor getCompressor(CompressionCodec codec, Configuration conf) {
  Compressor compressor = borrow(compressorPool, codec.getCompressorType());
  if (compressor == null) {
    compressor = codec.createCompressor();
    LOG.info("Got brand-new compressor ["+codec.getDefaultExtension()+"]");
  } else {
    compressor.reinit(conf); // conf is null here
    if (LOG.isDebugEnabled()) {
      LOG.debug("Got recycled compressor");
    }
  }

Please also refer to my unit test to reproduce the bug. 
To address this bug, I modified the code to ensure that the configuration is 
read back from the codec when a compressor is reused.
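
For comparison, here is a minimal caller-side sketch (my own example, with a hypothetical class name; it is not the SequenceFile code path and assumes the native zstd library is loaded): when the two-argument CodecPool.getCompressor(codec, conf) shown above is called with a non-null conf, reinit(conf) does receive the new settings, so the stale level only appears through callers that pass null, such as SequenceFile.Writer.init().

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.compress.CodecPool;
import org.apache.hadoop.io.compress.Compressor;
import org.apache.hadoop.io.compress.ZStandardCodec;

public class ZstdLevelExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    conf.set("io.compression.codec.zstd.level", "5"); // level may change dynamically

    ZStandardCodec codec = new ZStandardCodec();
    codec.setConf(conf);

    // Passing conf explicitly means the pool calls compressor.reinit(conf) with
    // a non-null conf, so even a recycled compressor sees the new level.
    Compressor compressor = CodecPool.getCompressor(codec, conf);
    try {
      // ... compress data with this compressor ...
    } finally {
      CodecPool.returnCompressor(compressor);
    }
  }
}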

  was:
In one of my projects, I need to dynamically adjust the compression level for
different files.
However, I found that in most cases the new compression level does not take
effect as expected; the old compression level continues to be used.
Here is the relevant code snippet:
ZStandardCodec zStandardCodec = new ZStandardCodec();
zStandardCodec.setConf(conf);
conf.set("io.compression.codec.zstd.level", "5"); // level may change 
dynamically
conf.set("io.compression.codec.zstd", zStandardCodec.getClass().getName());
writer = SequenceFile.createWriter(conf, 
SequenceFile.Writer.file(sequenceFilePath),
                                
SequenceFile.Writer.keyClass(LongWritable.class),
                                
SequenceFile.Writer.valueClass(BytesWritable.class),
                                
SequenceFile.Writer.compression(CompressionType.BLOCK));
The reason is that the SequenceFile.Writer.init() method calls
CodecPool.getCompressor(codec, null) to get a compressor.
If the compressor is a reused instance, the conf is not applied because it is
passed as null:
public static Compressor getCompressor(CompressionCodec codec, Configuration conf) {
  Compressor compressor = borrow(compressorPool, codec.getCompressorType());
  if (compressor == null) {
    compressor = codec.createCompressor();
    LOG.info("Got brand-new compressor ["+codec.getDefaultExtension()+"]");
  } else {
    compressor.reinit(conf); // conf is null here
    if(LOG.isDebugEnabled()) {
        LOG.debug("Got recycled compressor");
    }
  }

Please also refer to my unit test to reproduce the bug. 
To address this bug, I modified the code to ensure that the configuration is 
read back from the codec when a compressor is reused.


> Change of Codec configuration does not work
> ---
>
> Key: HDFS-17510
> URL: https://issues.apache.org/jira/browse/HDFS-17510
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: compress
>Reporter: Zhikai Hu
>Priority: Minor
>
> In one of my projects, I need to dynamically adjust the compression level for
> different files.
> However, I found that in most cases the new compression level does not take
> effect as expected; the old compression level continues to be used.
> Here is the relevant code snippet:
> ZStandardCodec zStandardCodec = new ZStandardCodec();
> zStandardCodec.setConf(conf);
> conf.set("io.compression.codec.zstd.level", "5"); // level may change 
> dynamically
> conf.set("io.compression.codec.zstd", zStandardCodec.getClass().getName());
> writer = SequenceFile.createWriter(conf, 
> SequenceFile.Writer.file(sequenceFilePath),
>                                 
> SequenceFile.Writer.keyClass(LongWritable.class),
>                                 
> SequenceFile.Writer.valueClass(BytesWritable.class),
>                                 
> SequenceFile.Writer.compression(CompressionType.BLOCK));
> The reason is that the SequenceFile.Writer.init() method calls 
> CodecPool.getC

[jira] [Created] (HDFS-17510) Change of Codec configuration does not work

2024-05-06 Thread Zhikai Hu (Jira)
Zhikai Hu created HDFS-17510:


 Summary: Change of Codec configuration does not work
 Key: HDFS-17510
 URL: https://issues.apache.org/jira/browse/HDFS-17510
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: compress
Reporter: Zhikai Hu


In one of my projects, I need to dynamically adjust the compression level for
different files.
However, I found that in most cases the new compression level does not take
effect as expected; the old compression level continues to be used.
Here is the relevant code snippet:
ZStandardCodec zStandardCodec = new ZStandardCodec();
zStandardCodec.setConf(conf);
conf.set("io.compression.codec.zstd.level", "5"); // level may change 
dynamically
conf.set("io.compression.codec.zstd", zStandardCodec.getClass().getName());
writer = SequenceFile.createWriter(conf, 
SequenceFile.Writer.file(sequenceFilePath),
                                
SequenceFile.Writer.keyClass(LongWritable.class),
                                
SequenceFile.Writer.valueClass(BytesWritable.class),
                                
SequenceFile.Writer.compression(CompressionType.BLOCK));
The reason is that the SequenceFile.Writer.init() method calls
CodecPool.getCompressor(codec, null) to get a compressor.
If the compressor is a reused instance, the conf is not applied because it is
passed as null:
public static Compressor getCompressor(CompressionCodec codec, Configuration conf) {
  Compressor compressor = borrow(compressorPool, codec.getCompressorType());
  if (compressor == null) {
    compressor = codec.createCompressor();
    LOG.info("Got brand-new compressor ["+codec.getDefaultExtension()+"]");
  } else {
    compressor.reinit(conf); // conf is null here
    if(LOG.isDebugEnabled()) {
        LOG.debug("Got recycled compressor");
    }
  }

Please also refer to my unit test to reproduce the bug. 
To address this bug, I modified the code to ensure that the configuration is 
read back from the codec when a compressor is reused.
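
As a small illustration of why reading the configuration back from the codec is sufficient (my own snippet, the class name is hypothetical and it is not part of the patch): ZStandardCodec implements Configurable, so the Configuration handed to setConf(), including an updated level, remains reachable from the codec instance that SequenceFile gives to CodecPool.

import org.apache.hadoop.conf.Configurable;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.compress.ZStandardCodec;

public class CodecConfCheck {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    conf.set("io.compression.codec.zstd.level", "5");

    ZStandardCodec codec = new ZStandardCodec();
    codec.setConf(conf);

    // The codec keeps a reference to the configuration it was given, so the
    // pool can consult it when reinit() would otherwise receive null.
    Configuration codecConf = ((Configurable) codec).getConf();
    System.out.println(codecConf.get("io.compression.codec.zstd.level")); // prints 5
  }
}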



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17509) RBF: Fix ClientProtocol.concat will throw NPE if tgr is a empty file.

2024-05-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17843680#comment-17843680
 ] 

ASF GitHub Bot commented on HDFS-17509:
---

LiuGuH commented on PR #6784:
URL: https://github.com/apache/hadoop/pull/6784#issuecomment-2095466774

   @goiri Hi sir, do you have time to review this PR? Thanks!




> RBF: Fix ClientProtocol.concat  will throw NPE if tgr is a empty file.
> --
>
> Key: HDFS-17509
> URL: https://issues.apache.org/jira/browse/HDFS-17509
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: liuguanghua
>Priority: Minor
>  Labels: pull-request-available
>
> hdfs dfs -concat  /tmp/merge /tmp/t1 /tmp/t2
> When /tmp/merge is an empty file, this command throws an NPE via the DFSRouter. 
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17486) VIO: dumpXattrs logic optimization

2024-05-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17843656#comment-17843656
 ] 

ASF GitHub Bot commented on HDFS-17486:
---

hadoop-yetus commented on PR #6797:
URL: https://github.com/apache/hadoop/pull/6797#issuecomment-2095333179

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 34s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  55m 38s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 37s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   1m 29s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   1m 25s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 43s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 20s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 58s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   4m  6s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  49m 20s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | -1 :x: |  mvninstall  |   1m  0s | 
[/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6797/1/artifact/out/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch failed.  |
   | -1 :x: |  compile  |   1m  5s | 
[/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6797/1/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt)
 |  hadoop-hdfs in the patch failed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.  |
   | -1 :x: |  javac  |   1m  5s | 
[/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6797/1/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt)
 |  hadoop-hdfs in the patch failed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.  |
   | -1 :x: |  compile  |   0m 59s | 
[/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6797/1/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt)
 |  hadoop-hdfs in the patch failed with JDK Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.  |
   | -1 :x: |  javac  |   0m 59s | 
[/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6797/1/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt)
 |  hadoop-hdfs in the patch failed with JDK Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   1m 13s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6797/1/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 2 new + 6 unchanged - 
0 fixed = 8 total (was 6)  |
   | -1 :x: |  mvnsite  |   1m  5s | 
[/patch-mvnsite-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6797/1/artifact/out/patch-mvnsite-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  javadoc  |   1m  2s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |