(hadoop) branch trunk updated: HADOOP-19180. EC: Fix calculation errors caused by special index order (#6813). Contributed by zhengchenyu.

2024-08-18 Thread zhangshuyan
This is an automated email from the ASF dual-hosted git repository.

zhangshuyan pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new e5b76dc99fd HADOOP-19180. EC: Fix calculation errors caused by special 
index order (#6813).  Contributed by zhengchenyu.
e5b76dc99fd is described below

commit e5b76dc99fdc7c9a3fc2132873eb4ef3e545bb4f
Author: zhengchenyu 
AuthorDate: Mon Aug 19 12:40:45 2024 +0800

HADOOP-19180. EC: Fix calculation errors caused by special index order 
(#6813).  Contributed by zhengchenyu.

Reviewed-by: He Xiaoqiao 
Signed-off-by: Shuyan Zhang 
---
 .../io/erasurecode/rawcoder/RSRawDecoder.java  |  32 +++---
 .../apache/hadoop/io/erasurecode/erasure_coder.c   |  36 +++
 .../apache/hadoop/io/erasurecode/erasure_coder.h   |   1 -
 .../hadoop/io/erasurecode/erasure_code_test.c  |  80 ++-
 .../TestErasureCodingEncodeAndDecode.java  | 108 +
 5 files changed, 195 insertions(+), 62 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RSRawDecoder.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RSRawDecoder.java
index d7f78abc050..824e701c71f 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RSRawDecoder.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RSRawDecoder.java
@@ -51,7 +51,6 @@ public class RSRawDecoder extends RawErasureDecoder {
   private byte[] gfTables;
   private int[] cachedErasedIndexes;
   private int[] validIndexes;
-  private int numErasedDataUnits;
   private boolean[] erasureFlags;
 
   public RSRawDecoder(ErasureCoderOptions coderOptions) {
@@ -120,14 +119,10 @@ public class RSRawDecoder extends RawErasureDecoder {
 this.gfTables = new byte[getNumAllUnits() * getNumDataUnits() * 32];
 
 this.erasureFlags = new boolean[getNumAllUnits()];
-this.numErasedDataUnits = 0;
 
 for (int i = 0; i < erasedIndexes.length; i++) {
   int index = erasedIndexes[i];
   erasureFlags[index] = true;
-  if (index < getNumDataUnits()) {
-numErasedDataUnits++;
-  }
 }
 
 generateDecodeMatrix(erasedIndexes);
@@ -156,21 +151,22 @@ public class RSRawDecoder extends RawErasureDecoder {
 
 GF256.gfInvertMatrix(tmpMatrix, invertMatrix, getNumDataUnits());
 
-for (i = 0; i < numErasedDataUnits; i++) {
-  for (j = 0; j < getNumDataUnits(); j++) {
-decodeMatrix[getNumDataUnits() * i + j] =
-invertMatrix[getNumDataUnits() * erasedIndexes[i] + j];
-  }
-}
-
-for (p = numErasedDataUnits; p < erasedIndexes.length; p++) {
-  for (i = 0; i < getNumDataUnits(); i++) {
-s = 0;
+for (p = 0; p < erasedIndexes.length; p++) {
+  int erasedIndex = erasedIndexes[p];
+  if (erasedIndex < getNumDataUnits()) {
 for (j = 0; j < getNumDataUnits(); j++) {
-  s ^= GF256.gfMul(invertMatrix[j * getNumDataUnits() + i],
-  encodeMatrix[getNumDataUnits() * erasedIndexes[p] + j]);
+  decodeMatrix[getNumDataUnits() * p + j] =
+  invertMatrix[getNumDataUnits() * erasedIndexes[p] + j];
+}
+  } else {
+for (i = 0; i < getNumDataUnits(); i++) {
+  s = 0;
+  for (j = 0; j < getNumDataUnits(); j++) {
+s ^= GF256.gfMul(invertMatrix[j * getNumDataUnits() + i],
+encodeMatrix[getNumDataUnits() * erasedIndexes[p] + j]);
+  }
+  decodeMatrix[getNumDataUnits() * p + i] = s;
 }
-decodeMatrix[getNumDataUnits() * p + i] = s;
   }
 }
   }
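
For context on the fix: the old two-pass loop assumed every erased data unit
preceded every erased parity unit in erasedIndexes, so an input like
{parityIndex, dataIndex} corrupted the decode matrix. The new single pass keys
each row off its position p, making any index order safe. A minimal,
self-contained sketch of the row-selection logic (toy code, not Hadoop's;
GF(256) arithmetic elided):

// DecodeRowOrderDemo.java -- toy illustration only.
public class DecodeRowOrderDemo {
  public static void main(String[] args) {
    int numDataUnits = 6;          // e.g. RS(6,3): units 0..5 data, 6..8 parity
    int[] erasedIndexes = {7, 2};  // the "special order": parity before data

    // Fixed logic: decode-matrix row p always describes erasedIndexes[p].
    for (int p = 0; p < erasedIndexes.length; p++) {
      int erased = erasedIndexes[p];
      String source = erased < numDataUnits
          ? "copy invertMatrix row " + erased
          : "multiply encodeMatrix row " + erased + " by invertMatrix";
      System.out.println("decodeMatrix row " + p + " <- " + source);
    }
  }
}
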
diff --git 
a/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/erasurecode/erasure_coder.c
 
b/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/erasurecode/erasure_coder.c
index b2d856b6f88..e7ea07af4ca 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/erasurecode/erasure_coder.c
+++ 
b/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/erasurecode/erasure_coder.c
@@ -132,9 +132,6 @@ static int processErasures(IsalDecoder* pCoder, unsigned 
char** inputs,
 index = erasedIndexes[i];
 pCoder->erasedIndexes[i] = index;
 pCoder->erasureFlags[index] = 1;
-if (index < numDataUnits) {
-  pCoder->numErasedDataUnits++;
-}
   }
 
   pCoder->numErased = numErased;
@@ -175,7 +172,6 @@ int decode(IsalDecoder* pCoder, unsigned char** inputs,
 
 // Clear variables used per decode call
 void clearDecoder(IsalDecoder* decoder) {
-  decoder->numErasedDataUnits = 0;
   decoder->numErased = 0;
   memset(decoder->gftbls, 0, sizeof(decoder->

(hadoop) branch trunk updated: HDFS-17383: Datanode current block token should come from active NameNode in HA mode (#6562). Contributed by lei w.

2024-04-15 Thread zhangshuyan
This is an automated email from the ASF dual-hosted git repository.

zhangshuyan pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new f49a4df797dd HDFS-17383: Datanode current block token should come from 
active NameNode in HA mode (#6562). Contributed by lei w.
f49a4df797dd is described below

commit f49a4df797dd91e8efb35ca882e4dbcb733bf6ed
Author: Lei313 <47049042+thinker...@users.noreply.github.com>
AuthorDate: Mon Apr 15 18:35:53 2024 +0800

HDFS-17383: Datanode current block token should come from active NameNode in 
HA mode (#6562). Contributed by lei w.

Reviewed-by: Shuyan Zhang 
Signed-off-by: Shuyan Zhang 
---
 .../token/block/BlockPoolTokenSecretManager.java   |   6 +-
 .../token/block/BlockTokenSecretManager.java   |  12 +-
 .../hdfs/server/blockmanagement/BlockManager.java  |   2 +-
 .../server/blockmanagement/DatanodeManager.java|   7 +-
 .../hdfs/server/datanode/BPOfferService.java   |   8 +-
 .../hdfs/server/datanode/BPServiceActor.java   |   1 +
 .../server/datanode/metrics/DataNodeMetrics.java   |   2 +-
 .../hadoop/hdfs/server/namenode/FSNamesystem.java  |   2 +-
 .../java/org/apache/hadoop/hdfs/DFSTestUtil.java   |   2 +-
 .../java/org/apache/hadoop/hdfs/TestGetBlocks.java |   2 +-
 .../hdfs/security/token/block/TestBlockToken.java  |   4 +-
 .../token/block/TestUpdateDataNodeCurrentKey.java  | 122 +
 .../server/blockmanagement/TestBlockManager.java   |   2 +-
 .../TestCorruptionWithFailover.java|   4 +-
 14 files changed, 155 insertions(+), 21 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/security/token/block/BlockPoolTokenSecretManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/security/token/block/BlockPoolTokenSecretManager.java
index 980b474c9924..31e4bd4a49ed 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/security/token/block/BlockPoolTokenSecretManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/security/token/block/BlockPoolTokenSecretManager.java
@@ -141,9 +141,9 @@ public class BlockPoolTokenSecretManager extends
   /**
* See {@link BlockTokenSecretManager#addKeys(ExportedBlockKeys)}.
*/
-  public void addKeys(String bpid, ExportedBlockKeys exportedKeys)
-  throws IOException {
-get(bpid).addKeys(exportedKeys);
+  public void addKeys(String bpid, ExportedBlockKeys exportedKeys,
+  boolean updateCurrentKey) throws IOException {
+get(bpid).addKeys(exportedKeys, updateCurrentKey);
   }
 
   /**
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/security/token/block/BlockTokenSecretManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/security/token/block/BlockTokenSecretManager.java
index b9f817db519f..d2c3fe6eae9a 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/security/token/block/BlockTokenSecretManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/security/token/block/BlockTokenSecretManager.java
@@ -218,17 +218,23 @@ public class BlockTokenSecretManager extends
 }
   }
 
+  public synchronized void addKeys(ExportedBlockKeys exportedKeys) throws 
IOException {
+addKeys(exportedKeys, true);
+  }
+
   /**
* Set block keys, only to be used in worker mode
*/
-  public synchronized void addKeys(ExportedBlockKeys exportedKeys)
-  throws IOException {
+  public synchronized void addKeys(ExportedBlockKeys exportedKeys,
+  boolean updateCurrentKey) throws IOException {
 if (isMaster || exportedKeys == null) {
   return;
 }
 LOG.info("Setting block keys. BlockPool = {} .", blockPoolId);
 removeExpiredKeys();
-this.currentKey = exportedKeys.getCurrentKey();
+if (updateCurrentKey || currentKey == null) {
+  this.currentKey = exportedKeys.getCurrentKey();
+}
 BlockKey[] receivedKeys = exportedKeys.getAllKeys();
 for (int i = 0; i < receivedKeys.length; i++) {
   if (receivedKeys[i] != null) {
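
The effect of the new updateCurrentKey flag, in a self-contained toy model
(not Hadoop code): keys pushed by a standby NameNode remain available for
verifying existing tokens, but only the active NameNode may replace the key
used to sign new ones.

import java.util.HashMap;
import java.util.Map;

// BlockKeyUpdateDemo.java -- toy model only.
public class BlockKeyUpdateDemo {
  private static final Map<Integer, String> allKeys = new HashMap<>();
  private static String currentKey;

  static void addKeys(int keyId, String key, boolean updateCurrentKey) {
    allKeys.put(keyId, key);            // always kept for token verification
    if (updateCurrentKey || currentKey == null) {
      currentKey = key;                 // only the active NameNode advances this
    }
  }

  public static void main(String[] args) {
    addKeys(1, "key-from-active", true);          // heartbeat from the active NN
    addKeys(2, "stale-key-from-standby", false);  // standby keys no longer win
    System.out.println("signing with: " + currentKey);  // key-from-active
  }
}
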
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
index 56c86482bd17..f0c88f375598 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
@@ -789,7 +789,7 @@ public class BlockManager implements BlockStatsMXBean {
 checkNSRunning = false;
   }
 
-  private boolean isBlockTokenEnabled() {
+  protected boolean isBlockTokenEnab

(hadoop) branch trunk updated: HDFS-17408: Reduce the number of quota calculations in FSDirRenameOp (#6653). Contributed by lei w.

2024-04-01 Thread zhangshuyan
This is an automated email from the ASF dual-hosted git repository.

zhangshuyan pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 36c22400b223 HDFS-17408: Reduce the number of quota calculations in 
FSDirRenameOp (#6653). Contributed by lei w.
36c22400b223 is described below

commit 36c22400b22301c0cf35fd68ae18f2084bb0f25f
Author: Lei313 <47049042+thinker...@users.noreply.github.com>
AuthorDate: Tue Apr 2 10:40:28 2024 +0800

HDFS-17408: Reduce the number of quota calculations in FSDirRenameOp 
(#6653). Contributed by lei w.

Reviewed-by: He Xiaoqiao 
Reviewed-by: Dinesh Chitlangia 
Signed-off-by: Shuyan Zhang 
---
 .../hadoop/hdfs/server/namenode/FSDirMkdirOp.java  |   6 +-
 .../hadoop/hdfs/server/namenode/FSDirRenameOp.java | 100 ++---
 .../hadoop/hdfs/server/namenode/FSDirectory.java   |  28 ++--
 .../TestCorrectnessOfQuotaAfterRenameOp.java   | 161 +
 .../namenode/snapshot/TestRenameWithSnapshots.java |   2 +-
 5 files changed, 261 insertions(+), 36 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirMkdirOp.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirMkdirOp.java
index 862880d95b2d..0d7f3b202a0b 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirMkdirOp.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirMkdirOp.java
@@ -36,6 +36,8 @@ import org.apache.hadoop.security.AccessControlException;
 
 import java.io.IOException;
 import java.util.List;
+import java.util.Optional;
+
 import static org.apache.hadoop.util.Time.now;
 
 class FSDirMkdirOp {
@@ -221,8 +223,8 @@ class FSDirMkdirOp {
 final INodeDirectory dir = new INodeDirectory(inodeId, name, permission,
 timestamp);
 
-INodesInPath iip =
-fsd.addLastINode(parent, dir, permission.getPermission(), true);
+INodesInPath iip = fsd.addLastINode(parent, dir, 
permission.getPermission(),
+true, Optional.empty());
 if (iip != null && aclEntries != null) {
   AclStorage.updateINodeAcl(dir, aclEntries, Snapshot.CURRENT_STATE_ID);
 }
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirRenameOp.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirRenameOp.java
index 64bc46d90162..0f6ceae82489 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirRenameOp.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirRenameOp.java
@@ -17,6 +17,8 @@
  */
 package org.apache.hadoop.hdfs.server.namenode;
 
+import org.apache.commons.lang3.tuple.Pair;
+import org.apache.hadoop.hdfs.protocol.HdfsConstants;
 import org.apache.hadoop.util.Preconditions;
 import org.apache.hadoop.fs.FileAlreadyExistsException;
 import org.apache.hadoop.fs.FileStatus;
@@ -43,6 +45,8 @@ import java.io.IOException;
 import java.util.ArrayList;
 import java.util.Arrays;
 import java.util.List;
+import java.util.Optional;
+
 import static 
org.apache.hadoop.hdfs.protocol.FSLimitException.MaxDirectoryItemsExceededException;
 import static 
org.apache.hadoop.hdfs.protocol.FSLimitException.PathComponentTooLongException;
 
@@ -68,14 +72,18 @@ class FSDirRenameOp {
* Verify quota for rename operation where srcInodes[srcInodes.length-1] 
moves
* dstInodes[dstInodes.length-1]
*/
-  private static void verifyQuotaForRename(FSDirectory fsd, INodesInPath src,
-  INodesInPath dst) throws QuotaExceededException {
+  private static Pair, Optional> 
verifyQuotaForRename(
+  FSDirectory fsd, INodesInPath src, INodesInPath dst) throws 
QuotaExceededException {
+Optional srcDelta = Optional.empty();
+Optional dstDelta = Optional.empty();
 if (!fsd.getFSNamesystem().isImageLoaded() || fsd.shouldSkipQuotaChecks()) 
{
   // Do not check quota if edits log is still being processed
-  return;
+  return Pair.of(srcDelta, dstDelta);
 }
 int i = 0;
-while(src.getINode(i) == dst.getINode(i)) { i++; }
+while (src.getINode(i) == dst.getINode(i)) {
+  i++;
+}
 // src[i - 1] is the last common ancestor.
 BlockStoragePolicySuite bsps = fsd.getBlockStoragePolicySuite();
 // Assume dstParent existence check done by callers.
@@ -88,13 +96,19 @@ class FSDirRenameOp {
 final QuotaCounts delta = src.getLastINode()
 .computeQuotaUsage(bsps, storagePolicyID, false,
 Snapshot.CURRENT_STATE_ID);
+QuotaCounts srcQuota = new QuotaCounts.Builder().quotaCount(delta).build();
+srcDelta = Optional.of(srcQuota);
 
 // Reduce the required quota by dst that is being removed
 fin
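
The diff above is cut off by the archive, but the idea is already visible:
verifyQuotaForRename walks the source and destination subtrees to compute
QuotaCounts anyway, so it now returns them and the rename applies the same
deltas without a second walk. A toy model (not Hadoop code) of that
validate-once, reuse-later pattern:

import java.util.Optional;

// QuotaReuseDemo.java -- toy model only.
public class QuotaReuseDemo {
  static long expensiveQuotaUsage() {   // stands in for computeQuotaUsage()
    System.out.println("walking subtree...");
    return 12_345L;
  }

  static Optional<Long> verifyQuota() {
    long delta = expensiveQuotaUsage(); // computed once during verification
    // ... quota checks against delta would happen here ...
    return Optional.of(delta);
  }

  public static void main(String[] args) {
    Optional<Long> cached = verifyQuota();
    long usage = cached.orElseGet(QuotaReuseDemo::expensiveQuotaUsage);
    System.out.println("rename applies delta " + usage); // subtree walked once
  }
}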

(hadoop) branch trunk updated (5584efd8d418 -> 7012986fc3d2)

2024-03-06 Thread zhangshuyan
This is an automated email from the ASF dual-hosted git repository.

zhangshuyan pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


from 5584efd8d418 HDFS-17396. BootstrapStandby should download rollback 
image during RollingUpgrade (#6583)
 add 7012986fc3d2 HDFS-17345. Add a metric to record block report 
generation cost time. (#6475). Contributed by farmmamba.

No new revisions were added by this update.

Summary of changes:
 hadoop-common-project/hadoop-common/src/site/markdown/Metrics.md | 2 ++
 .../java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java  | 1 +
 .../apache/hadoop/hdfs/server/datanode/metrics/DataNodeMetrics.java  | 5 +
 3 files changed, 8 insertions(+)
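
A minimal sketch of the pattern this change adds (the metric setter below is
hypothetical, not the actual DataNodeMetrics API): time the block-report
construction and record the duration.

import java.util.concurrent.TimeUnit;

// ReportCostDemo.java -- sketch only; addBlockReportCreateCost is hypothetical.
public class ReportCostDemo {
  public static void main(String[] args) throws InterruptedException {
    long start = System.nanoTime();
    Thread.sleep(25);   // stand-in for building the block report
    long costMs = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - start);
    System.out.println("metrics.addBlockReportCreateCost(" + costMs + " ms)");
  }
}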





(hadoop) branch trunk updated (a897e745f598 -> 15af52954f32)

2024-02-26 Thread zhangshuyan
This is an automated email from the ASF dual-hosted git repository.

zhangshuyan pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


from a897e745f598 HDFS-17393. Remove unused FSNamesytemLock cond in 
FSNamesystem (#6567)
 add 15af52954f32 HDFS-17358. EC: infinite lease recovery caused by an RWR 
replica of length zero or by a datanode missing the replica. (#6509). 
Contributed by farmmamba.

No new revisions were added by this update.

Summary of changes:
 .../hdfs/server/datanode/BlockRecoveryWorker.java  | 44 +-
 .../datanode/erasurecode/StripedBlockReader.java   |  4 +-
 .../hadoop/hdfs/TestLeaseRecoveryStriped.java  | 29 ++
 3 files changed, 66 insertions(+), 11 deletions(-)
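
A self-contained toy model (not Hadoop code) of the filtering this change
introduces: replicas reported as RWR with length 0, or DataNodes that do not
hold the replica at all, are dropped from the recovery candidate set so a
recovery round can converge instead of retrying forever.

import java.util.ArrayList;
import java.util.List;

// RecoveryCandidateDemo.java -- toy model only.
public class RecoveryCandidateDemo {
  static final class Replica {
    final String dn; final String state; final long length;
    Replica(String dn, String state, long length) {
      this.dn = dn; this.state = state; this.length = length;
    }
  }

  public static void main(String[] args) {
    List<Replica> reported = new ArrayList<>();
    reported.add(new Replica("dn1", "RWR", 0));     // useless for recovery
    reported.add(new Replica("dn2", "RBW", 4096));
    reported.add(new Replica("dn3", "RWR", 2048));

    List<Replica> candidates = new ArrayList<>();
    for (Replica r : reported) {
      if (!("RWR".equals(r.state) && r.length == 0)) {
        candidates.add(r);                          // dn2 and dn3 remain
      }
    }
    System.out.println(candidates.size() + " usable replicas");
  }
}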





(hadoop) branch trunk updated: HDFS-17342. Fix DataNode mistakenly invalidating a normal block and causing a missing block (#6464). Contributed by Haiyang Hu.

2024-02-06 Thread zhangshuyan
This is an automated email from the ASF dual-hosted git repository.

zhangshuyan pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 5ad7737132df HDFS-17342. Fix DataNode mistakenly invalidating a normal 
block and causing a missing block (#6464). Contributed by Haiyang Hu.
5ad7737132df is described below

commit 5ad7737132df34a03241afd948552b6a0a3a3060
Author: huhaiyang 
AuthorDate: Tue Feb 6 17:52:52 2024 +0800

HDFS-17342. Fix DataNode mistakenly invalidating a normal block and causing a 
missing block (#6464). Contributed by Haiyang Hu.

Reviewed-by: ZanderXu 
Reviewed-by: Chengwei Wang <1139557...@qq.com>
Signed-off-by: Shuyan Zhang 
---
 .../server/datanode/DataNodeFaultInjector.java |  5 ++
 .../datanode/fsdataset/impl/FsDatasetImpl.java | 19 -
 .../datanode/fsdataset/impl/TestFsDatasetImpl.java | 95 +-
 3 files changed, 114 insertions(+), 5 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNodeFaultInjector.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNodeFaultInjector.java
index e9cdb2cc92d5..372271b4fb28 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNodeFaultInjector.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNodeFaultInjector.java
@@ -167,4 +167,9 @@ public class DataNodeFaultInjector {
* Just delay run diff record a while.
*/
   public void delayDiffRecord() {}
+
+  /**
+   * Just delay getMetaDataInputStream a while.
+   */
+  public void delayGetMetaDataInputStream() {}
 }
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
index 27fcbb12faba..b1526c9860e9 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
@@ -63,6 +63,7 @@ import org.apache.hadoop.hdfs.ExtendedBlockId;
 import org.apache.hadoop.hdfs.server.common.AutoCloseDataSetLock;
 import org.apache.hadoop.hdfs.server.common.DataNodeLockManager;
 import org.apache.hadoop.hdfs.server.common.DataNodeLockManager.LockLevel;
+import org.apache.hadoop.hdfs.server.datanode.DataNodeFaultInjector;
 import org.apache.hadoop.hdfs.server.datanode.DataSetLockManager;
 import org.apache.hadoop.hdfs.server.datanode.FileIoProvider;
 import org.apache.hadoop.hdfs.server.datanode.FinalizedReplica;
@@ -247,6 +248,7 @@ class FsDatasetImpl implements FsDatasetSpi<FsVolumeImpl> {
 if (info == null || !info.metadataExists()) {
   return null;
 }
+DataNodeFaultInjector.get().delayGetMetaDataInputStream();
 return info.getMetadataInputStream(0);
   }
 
@@ -2403,8 +2405,9 @@ class FsDatasetImpl implements FsDatasetSpi<FsVolumeImpl> {
*
* @param bpid the block pool ID.
* @param block The block to be invalidated.
+   * @param checkFiles Whether to check data and meta files.
*/
-  public void invalidateMissingBlock(String bpid, Block block) {
+  public void invalidateMissingBlock(String bpid, Block block, boolean 
checkFiles) {
 
 // The replica seems is on its volume map but not on disk.
 // We can't confirm here is block file lost or disk failed.
@@ -2416,11 +2419,21 @@ class FsDatasetImpl implements FsDatasetSpi<FsVolumeImpl> {
 // So remove if from volume map notify namenode is ok.
 try (AutoCloseableLock lock = lockManager.writeLock(LockLevel.BLOCK_POOl,
 bpid)) {
-  ReplicaInfo replica = volumeMap.remove(bpid, block);
-  invalidate(bpid, replica);
+  // Check if this block is on the volume map.
+  ReplicaInfo replica = volumeMap.get(bpid, block);
+  // Double-check block or meta file existence when checkFiles as true.
+  if (replica != null && (!checkFiles ||
+  (!replica.blockDataExists() || !replica.metadataExists()))) {
+volumeMap.remove(bpid, block);
+invalidate(bpid, replica);
+  }
 }
   }
 
+  public void invalidateMissingBlock(String bpid, Block block) {
+invalidateMissingBlock(bpid, block, true);
+  }
+
   /**
* Remove Replica from ReplicaMap.
*
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestFsDatasetImpl.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestFsDatasetImpl.java
index 2f068a6a69c6..5468473d9de0 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestFsDatasetImpl

(hadoop) branch trunk updated: HDFS-17339: Skip cacheReport when one blockPool does not have CacheBlock on this DataNode (#6456). Contributed by lei w.

2024-01-25 Thread zhangshuyan
This is an automated email from the ASF dual-hosted git repository.

zhangshuyan pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new ac471d7daa32 HDFS-17339: Skip cacheReport when one blockPool does not 
have CacheBlock on this DataNode (#6456). Contributed by lei w.
ac471d7daa32 is described below

commit ac471d7daa320a02cf31bc0864b9428ce8344925
Author: Lei313 <47049042+thinker...@users.noreply.github.com>
AuthorDate: Thu Jan 25 21:15:20 2024 +0800

HDFS-17339: Skip cacheReport when one blockPool does not have CacheBlock on 
this DataNode (#6456). Contributed by lei w.

Signed-off-by: Shuyan Zhang 
---
 .../java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java   | 4 
 1 file changed, 4 insertions(+)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java
index 4bac0d8fb47f..13ff9549e020 100755
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java
@@ -503,6 +503,10 @@ class BPServiceActor implements Runnable {
 
   String bpid = bpos.getBlockPoolId();
  List<Long> blockIds = dn.getFSDataset().getCacheReport(bpid);
+  // Skip cache report
+  if (blockIds.isEmpty()) {
+return null;
+  }
   long createTime = monotonicNow();
 
   cmd = bpNamenode.cacheReport(bpRegistration, bpid, blockIds);





(hadoop) branch trunk updated (6b80b1e60f1a -> caba9bbab3c6)

2024-01-24 Thread zhangshuyan
This is an automated email from the ASF dual-hosted git repository.

zhangshuyan pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


from 6b80b1e60f1a Revert "YARN-11041. Replace all occurences of queuePath 
with the new QueuePath class - followup" (#6497)
 add caba9bbab3c6 HDFS-17346. Fix DirectoryScanner check mistakenly marking 
normal blocks as corrupt (#6476). Contributed by Haiyang Hu.

No new revisions were added by this update.

Summary of changes:
 .../server/datanode/DataNodeFaultInjector.java |  5 ++
 .../hdfs/server/datanode/DirectoryScanner.java |  1 +
 .../datanode/fsdataset/impl/FsDatasetImpl.java | 17 +++--
 .../hdfs/server/datanode/TestDirectoryScanner.java | 86 ++
 4 files changed, 103 insertions(+), 6 deletions(-)





(hadoop) branch trunk updated: HDFS-17293. First packet data + checksum size will be set to 516 bytes when writing to a new block. (#6368). Contributed by farmmamba.

2024-01-21 Thread zhangshuyan
This is an automated email from the ASF dual-hosted git repository.

zhangshuyan pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 54f7a6b12790 HDFS-17293. First packet data + checksum size will be set 
to 516 bytes when writing to a new block. (#6368). Contributed by farmmamba.
54f7a6b12790 is described below

commit 54f7a6b127908cebedf44f4a96ee06e12e98f0d6
Author: hfutatzhanghb 
AuthorDate: Mon Jan 22 11:50:51 2024 +0800

HDFS-17293. First packet data + checksum size will be set to 516 bytes when 
writing to a new block. (#6368). Contributed by farmmamba.

Reviewed-by: He Xiaoqiao 
Signed-off-by:  Shuyan Zhang 
---
 .../org/apache/hadoop/hdfs/DFSOutputStream.java|  9 +++--
 .../apache/hadoop/hdfs/TestDFSOutputStream.java| 41 ++
 2 files changed, 48 insertions(+), 2 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
index b6634eddc891..a1bfb7f5d594 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
@@ -536,8 +536,13 @@ public class DFSOutputStream extends FSOutputSummer
 }
 
 if (!getStreamer().getAppendChunk()) {
-  final int psize = (int) Math
-  .min(blockSize - getStreamer().getBytesCurBlock(), writePacketSize);
+  int psize = 0;
+  if (blockSize == getStreamer().getBytesCurBlock()) {
+psize = writePacketSize;
+  } else {
+psize = (int) Math
+.min(blockSize - getStreamer().getBytesCurBlock(), 
writePacketSize);
+  }
   computePacketChunkSize(psize, bytesPerChecksum);
 }
   }
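
Why the first packet came out as 516 bytes: when a block is exactly full,
blockSize - getBytesCurBlock() is 0, so the old min() chose a zero-length body
and computePacketChunkSize fell back to a single chunk of 512 data bytes plus
a 4-byte CRC32C checksum. A worked example under default client settings
(64 KiB write packets; the 33-byte maximum packet header is an assumption
taken from PacketHeader.PKT_MAX_HEADER_LEN):

// PacketSizeDemo.java -- worked arithmetic, defaults assumed as noted above.
public class PacketSizeDemo {
  public static void main(String[] args) {
    long blockSize = 1024 * 1024;
    long bytesCurBlock = blockSize;  // the previous block was just filled
    int writePacketSize = 65536, chunk = 512 + 4, header = 33;

    int oldPsize = (int) Math.min(blockSize - bytesCurBlock, writePacketSize); // 0
    int newPsize = (bytesCurBlock == blockSize)
        ? writePacketSize            // the fix: start with a full packet
        : (int) Math.min(blockSize - bytesCurBlock, writePacketSize);

    // computePacketChunkSize keeps at least one whole chunk per packet.
    System.out.println(Math.max((oldPsize - header) / chunk, 1) * chunk); // 516
    System.out.println(Math.max((newPsize - header) / chunk, 1) * chunk); // 65016
  }
}
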
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSOutputStream.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSOutputStream.java
index 0f1b965cc264..bdb91f91bc5e 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSOutputStream.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSOutputStream.java
@@ -49,6 +49,7 @@ import org.apache.hadoop.hdfs.protocol.ExtendedBlock;
 import org.apache.hadoop.hdfs.protocol.HdfsConstants;
 import org.apache.hadoop.hdfs.protocol.HdfsFileStatus;
 import org.apache.hadoop.hdfs.protocol.datatransfer.BlockConstructionStage;
+import org.apache.hadoop.hdfs.protocol.datatransfer.PacketHeader;
 import org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver;
 import org.apache.hadoop.hdfs.server.blockmanagement.BlockManager;
 import org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor;
@@ -58,6 +59,7 @@ import org.apache.hadoop.io.IOUtils;
 import org.apache.hadoop.test.GenericTestUtils;
 import org.apache.hadoop.test.PathUtils;
 import org.apache.hadoop.test.Whitebox;
+import org.apache.hadoop.util.DataChecksum;
 import org.junit.AfterClass;
 import org.junit.Assert;
 import org.junit.BeforeClass;
@@ -508,6 +510,45 @@ public class TestDFSOutputStream {
 }
   }
 
+  @Test(timeout=60000)
+  public void testFirstPacketSizeInNewBlocks() throws IOException {
+final long blockSize = (long) 1024 * 1024;
+MiniDFSCluster dfsCluster = cluster;
+DistributedFileSystem fs = dfsCluster.getFileSystem();
+Configuration dfsConf = fs.getConf();
+
+EnumSet<CreateFlag> flags = EnumSet.of(CreateFlag.CREATE);
+try(FSDataOutputStream fos = fs.create(new Path("/testfile.dat"),
+FsPermission.getDefault(),
+flags, 512, (short)3, blockSize, null)) {
+
+  DataChecksum crc32c = DataChecksum.newDataChecksum(
+  DataChecksum.Type.CRC32C, 512);
+
+  long loop = 0;
+  Random r = new Random();
+  byte[] buf = new byte[(int) blockSize];
+  r.nextBytes(buf);
+  fos.write(buf);
+  fos.hflush();
+
+  int chunkSize = crc32c.getBytesPerChecksum() + crc32c.getChecksumSize();
+  int packetContentSize = (dfsConf.getInt(DFS_CLIENT_WRITE_PACKET_SIZE_KEY,
+  DFS_CLIENT_WRITE_PACKET_SIZE_DEFAULT) -
+  PacketHeader.PKT_MAX_HEADER_LEN) / chunkSize * chunkSize;
+
+  while (loop < 20) {
+r.nextBytes(buf);
+fos.write(buf);
+fos.hflush();
+loop++;
+Assert.assertEquals(((DFSOutputStream) 
fos.getWrappedStream()).packetSize,
+packetContentSize);
+  }
+}
+fs.delete(new Path("/testfile.dat"), true);
+  }
+
   @AfterClass
   public static void tearDown() {
 if (cluster != null) {



(hadoop) branch trunk updated: HDFS-17331: Fix Blocks always showing -1 and DataNode version always showing UNKNOWN in federationhealth.html (#6429). Contributed by lei w.

2024-01-18 Thread zhangshuyan
This is an automated email from the ASF dual-hosted git repository.

zhangshuyan pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new cc4c4be1b7bd HDFS-17331: Fix Blocks always showing -1 and DataNode 
version always showing UNKNOWN in federationhealth.html (#6429). Contributed 
by lei w.
cc4c4be1b7bd is described below

commit cc4c4be1b7bda8f5869241a50197699da0f99f4d
Author: Lei313 <47049042+thinker...@users.noreply.github.com>
AuthorDate: Thu Jan 18 21:10:54 2024 +0800

HDFS-17331: Fix Blocks always showing -1 and DataNode version always showing 
UNKNOWN in federationhealth.html (#6429). Contributed by lei w.

Signed-off-by: Shuyan Zhang 
---
 .../org/apache/hadoop/hdfs/protocol/DatanodeInfo.java | 15 +--
 .../org/apache/hadoop/hdfs/protocolPB/PBHelperClient.java |  6 +-
 .../hadoop-hdfs-client/src/main/proto/hdfs.proto  |  1 +
 .../server/federation/metrics/NamenodeBeanMetrics.java|  2 +-
 .../server/federation/router/RouterClientProtocol.java|  9 +++--
 5 files changed, 27 insertions(+), 6 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/DatanodeInfo.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/DatanodeInfo.java
index fbe6bcc4629d..d3e6e7115041 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/DatanodeInfo.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/DatanodeInfo.java
@@ -143,7 +143,7 @@ public class DatanodeInfo extends DatanodeID implements 
Node {
   final int xceiverCount, final String networkLocation,
   final AdminStates adminState, final String upgradeDomain,
   final long lastBlockReportTime, final long lastBlockReportMonotonic,
-   final int blockCount) {
+  final int blockCount, final String softwareVersion) {
 super(ipAddr, hostName, datanodeUuid, xferPort, infoPort, infoSecurePort,
 ipcPort);
 this.capacity = capacity;
@@ -162,6 +162,7 @@ public class DatanodeInfo extends DatanodeID implements 
Node {
 this.lastBlockReportTime = lastBlockReportTime;
 this.lastBlockReportMonotonic = lastBlockReportMonotonic;
 this.numBlocks = blockCount;
+this.softwareVersion =  softwareVersion;
   }
 
   /** Network location name. */
@@ -699,6 +700,7 @@ public class DatanodeInfo extends DatanodeID implements 
Node {
 private long lastBlockReportTime = 0L;
 private long lastBlockReportMonotonic = 0L;
 private int numBlocks = 0;
+private String softwareVersion;
 
 // Please use setNumBlocks explicitly to set numBlocks as this method 
doesn't have
 // sufficient info about numBlocks
@@ -718,6 +720,9 @@ public class DatanodeInfo extends DatanodeID implements 
Node {
   this.upgradeDomain = from.getUpgradeDomain();
   this.lastBlockReportTime = from.getLastBlockReportTime();
   this.lastBlockReportMonotonic = from.getLastBlockReportMonotonic();
+  if (from.getSoftwareVersion() != null) {
+this.softwareVersion = from.getSoftwareVersion();
+  }
   setNodeID(from);
   return this;
 }
@@ -844,18 +849,24 @@ public class DatanodeInfo extends DatanodeID implements 
Node {
   this.lastBlockReportMonotonic = time;
   return this;
 }
+
 public DatanodeInfoBuilder setNumBlocks(int blockCount) {
   this.numBlocks = blockCount;
   return this;
 }
 
+public DatanodeInfoBuilder setSoftwareVersion(String dnVersion) {
+  this.softwareVersion = dnVersion;
+  return this;
+}
+
 public DatanodeInfo build() {
   return new DatanodeInfo(ipAddr, hostName, datanodeUuid, xferPort,
   infoPort, infoSecurePort, ipcPort, capacity, dfsUsed, nonDfsUsed,
   remaining, blockPoolUsed, cacheCapacity, cacheUsed, lastUpdate,
   lastUpdateMonotonic, xceiverCount, location, adminState,
   upgradeDomain, lastBlockReportTime, lastBlockReportMonotonic,
-  numBlocks);
+  numBlocks, softwareVersion);
 }
   }
 }
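
A minimal usage sketch of the extended builder (values illustrative; assumes
the hadoop-hdfs-client classes on the classpath):

import org.apache.hadoop.hdfs.protocol.DatanodeInfo;

public class DatanodeInfoBuilderDemo {
  public static void main(String[] args) {
    DatanodeInfo info = new DatanodeInfo.DatanodeInfoBuilder()
        .setNumBlocks(1024)
        .setSoftwareVersion("3.5.0")  // now carried through the PB round trip
        .build();
    System.out.println(info.getSoftwareVersion());
  }
}
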
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelperClient.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelperClient.java
index 26ee5de2886e..b6d3bc7227d7 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelperClient.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelperClient.java
@@ -385,6 +385,9 @@ public class PBHelperClient {
 if (info.getUpgradeDomain() != null) {
   builder.setUpgradeDomain(info.getUpgradeDomain());
 }
+if (info.getSoftwareVersion() != null) {
+  builder.setSoftwareVersion(info.getSoft

(hadoop) branch trunk updated (eeb657e85f3f -> ba6ada73acc2)

2024-01-17 Thread zhangshuyan
This is an automated email from the ASF dual-hosted git repository.

zhangshuyan pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


from eeb657e85f3f HADOOP-19033. S3A: disable checksums when 
fs.s3a.checksum.validation = false (#6441)
 add ba6ada73acc2 HDFS-17337. RPC RESPONSE time seems not exactly accurate 
when using FSEditLogAsync. (#6439). Contributed by farmmamba.

No new revisions were added by this update.

Summary of changes:
 .../hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java  | 7 +++
 1 file changed, 3 insertions(+), 4 deletions(-)





(hadoop) branch trunk updated: HDFS-17291. DataNode metric bytesWritten is not totally accurate in some situations. (#6360). Contributed by farmmamba.

2024-01-13 Thread zhangshuyan
This is an automated email from the ASF dual-hosted git repository.

zhangshuyan pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new a30681077b8c HDFS-17291. DataNode metric bytesWritten is not totally 
accurate in some situations. (#6360). Contributed by farmmamba.
a30681077b8c is described below

commit a30681077b8c924954b6c46755370d110aeeda12
Author: hfutatzhanghb 
AuthorDate: Sat Jan 13 20:45:00 2024 +0800

HDFS-17291. DataNode metric bytesWritten is not totally accurate in some 
situations. (#6360). Contributed by farmmamba.

Reviewed-by: huangzhaobo 
Signed-off-by:  Shuyan Zhang 
---
 .../main/java/org/apache/hadoop/hdfs/server/datanode/BlockReceiver.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockReceiver.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockReceiver.java
index 86ee6bd431ef..38b03f8d6a24 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockReceiver.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockReceiver.java
@@ -842,7 +842,7 @@ class BlockReceiver implements Closeable {
   
   replicaInfo.setLastChecksumAndDataLen(offsetInBlock, lastCrc);
 
-  datanode.metrics.incrBytesWritten(len);
+  datanode.metrics.incrBytesWritten(numBytesToDisk);
   datanode.metrics.incrTotalWriteTime(duration);
 
   manageWriterOsCache(offsetInBlock, seqno);
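
The distinction matters when a retransmitted packet overlaps bytes already on
disk: only the non-overlapping tail is written, so only the tail may be
counted. A simplified worked example (not the exact BlockReceiver arithmetic):

// BytesWrittenDemo.java -- simplified model only.
public class BytesWrittenDemo {
  public static void main(String[] args) {
    long onDiskLen = 1024;                 // bytes of this block already on disk
    long offsetInBlock = 512, len = 1024;  // packet covers bytes 512..1536
    long numBytesToDisk = offsetInBlock + len - onDiskLen;  // 512 new bytes
    System.out.println("count " + numBytesToDisk + " bytes, not " + len);
  }
}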





(hadoop) branch trunk updated (2f1e1558b6fc -> ead7b7f5656c)

2024-01-13 Thread zhangshuyan
This is an automated email from the ASF dual-hosted git repository.

zhangshuyan pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


from 2f1e1558b6fc HADOOP-19004. S3A: Support Authentication through 
HttpSigner API (#6324)
 add ead7b7f5656c HDFS-17289. Treating a non-last block whose size equals 
the complete block size as complete can cause append failure. (#6357). 
Contributed by farmmamba.

No new revisions were added by this update.

Summary of changes:
 .../org/apache/hadoop/hdfs/DFSOutputStream.java|  8 ++---
 .../org/apache/hadoop/hdfs/TestFileAppend3.java| 36 ++
 2 files changed, 40 insertions(+), 4 deletions(-)





(hadoop) branch trunk updated: HDFS-17283. Change the name of variable SECOND in HdfsClientConfigKeys. (#6339). Contributed by farmmamba.

2024-01-04 Thread zhangshuyan
This is an automated email from the ASF dual-hosted git repository.

zhangshuyan pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new d5468d84baaa HDFS-17283. Change the name of variable SECOND in 
HdfsClientConfigKeys. (#6339). Contributed by farmmamba.
d5468d84baaa is described below

commit d5468d84baaafbb3cd1245639be401d63928aaa7
Author: hfutatzhanghb 
AuthorDate: Thu Jan 4 19:53:47 2024 +0800

HDFS-17283. Change the name of variable SECOND in HdfsClientConfigKeys. 
(#6339). Contributed by farmmamba.

Reviewed-by: Xing Lin 
Signed-off-by: Shuyan Zhang 
---
 .../java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java| 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
index 93e214a403ee..efaa5601ad81 100755
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
@@ -25,8 +25,8 @@ import java.util.concurrent.TimeUnit;
 /** Client configuration properties */
 @InterfaceAudience.Private
 public interface HdfsClientConfigKeys {
-  long SECOND = 1000L;
-  long MINUTE = 60 * SECOND;
+  long MS_PER_SECOND = 1000L;
+  long MINUTE = 60 * MS_PER_SECOND;
 
   String  DFS_BLOCK_SIZE_KEY = "dfs.blocksize";
   longDFS_BLOCK_SIZE_DEFAULT = 128*1024*1024;
@@ -423,7 +423,7 @@ public interface HdfsClientConfigKeys {
   int COUNT_LIMIT_DEFAULT = 2048;
   String  COUNT_RESET_TIME_PERIOD_MS_KEY =
   PREFIX + "count-reset-time-period-ms";
-  longCOUNT_RESET_TIME_PERIOD_MS_DEFAULT = 10*SECOND;
+  longCOUNT_RESET_TIME_PERIOD_MS_DEFAULT = 10 * MS_PER_SECOND;
 }
   }
 





(hadoop) branch trunk updated: HDFS-17275: Judge whether the block has been deleted in the block report (#6335). Contributed by lei w.

2023-12-26 Thread zhangshuyan
This is an automated email from the ASF dual-hosted git repository.

zhangshuyan pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 773dd7cc85d3 HDFS-17275: Judge whether the block has been deleted in 
the block report (#6335). Contributed by lei w.
773dd7cc85d3 is described below

commit 773dd7cc85d30bebb42d2f6f59e5e307df332f59
Author: Lei313 <47049042+thinker...@users.noreply.github.com>
AuthorDate: Tue Dec 26 18:25:02 2023 +0800

HDFS-17275: Judge whether the block has been deleted in the block report 
(#6335). Contributed by lei w.

Reviewed-by: He Xiaoqiao 
Signed-off-by: Shuyan Zhang 
---
 .../org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
index 76efb353004d..56c86482bd17 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
@@ -3425,7 +3425,7 @@ public class BlockManager implements BlockStatsMXBean {
 
 // find block by blockId
 BlockInfo storedBlock = getStoredBlock(block);
-if(storedBlock == null) {
+if (storedBlock == null) {
   // If blocksMap does not contain reported block id,
   // The replica should be removed from Datanode, and set NumBytes to 
BlockCommand.No_ACK to
   // avoid useless report to NameNode from Datanode when complete to 
process it.
@@ -3439,8 +3439,8 @@ public class BlockManager implements BlockStatsMXBean {
 // Block is on the NN
 LOG.debug("In memory blockUCState = {}", ucState);
 
-// Ignore replicas already scheduled to be removed from the DN
-if(invalidateBlocks.contains(dn, block)) {
+// Ignore replicas already scheduled to be removed from the DN or had been 
deleted
+if (invalidateBlocks.contains(dn, block) || storedBlock.isDeleted()) {
   return storedBlock;
 }
 





(hadoop) branch trunk updated: HDFS-17152. Fix the documentation of count command in FileSystemShell.md. (#5939). Contributed by farmmamba.

2023-12-11 Thread zhangshuyan
This is an automated email from the ASF dual-hosted git repository.

zhangshuyan pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new e91daae31872 HDFS-17152. Fix the documentation of count command in 
FileSystemShell.md. (#5939). Contributed by farmmamba.
e91daae31872 is described below

commit e91daae31872531d11cc05a14b2dcfe38ce626bb
Author: hfutatzhanghb 
AuthorDate: Mon Dec 11 16:53:37 2023 +0800

HDFS-17152. Fix the documentation of count command in FileSystemShell.md. 
(#5939). Contributed by farmmamba.

Reviewed-by: Shilun Fan 
Signed-off-by:  Shuyan Zhang 
---
 .../hadoop-common/src/site/markdown/FileSystemShell.md| 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/FileSystemShell.md 
b/hadoop-common-project/hadoop-common/src/site/markdown/FileSystemShell.md
index 451b33d74faa..bbc927714413 100644
--- a/hadoop-common-project/hadoop-common/src/site/markdown/FileSystemShell.md
+++ b/hadoop-common-project/hadoop-common/src/site/markdown/FileSystemShell.md
@@ -137,13 +137,13 @@ Usage: `hadoop fs -count [-q] [-h] [-v] [-x] [-t 
[]] [-u] [-e] [-s
 
 Count the number of directories, files and bytes under the paths that match 
the specified file pattern. Get the quota and the usage. The output columns 
with -count are: DIR\_COUNT, FILE\_COUNT, CONTENT\_SIZE, PATHNAME
 
-The -u and -q options control what columns the output contains.  -q means show 
quotas, -u limits the output to show quotas and usage only.
+The -u and -q options control what columns the output contains.  -q means show 
quotas and usage, -u limits the output to show quotas only.
 
 The output columns with -count -q are: QUOTA, REMAINING\_QUOTA, SPACE\_QUOTA, 
REMAINING\_SPACE\_QUOTA, DIR\_COUNT, FILE\_COUNT, CONTENT\_SIZE, PATHNAME
 
 The output columns with -count -u are: QUOTA, REMAINING\_QUOTA, SPACE\_QUOTA, 
REMAINING\_SPACE\_QUOTA, PATHNAME
 
-The -t option shows the quota and usage for each storage type. The -t option 
is ignored if -u or -q option is not given. The list of possible parameters 
that can be used in -t option(case insensitive except the parameter ""): "", 
"all", "ram_disk", "ssd", "disk" or "archive".
+The -t option shows the quota and usage for each storage type. The -t option 
is ignored if -u or -q option is not given. The list of possible parameters 
that can be used in -t option(case insensitive except the parameter): "", 
"all", "ram_disk", "ssd", "disk" or "archive".
 
 The -h option shows sizes in human readable format.
 





(hadoop) branch trunk updated: HADOOP-18989. Use thread pool to improve the speed of creating control files in TestDFSIO (#6294). Contributed by farmmamba.

2023-12-08 Thread zhangshuyan
This is an automated email from the ASF dual-hosted git repository.

zhangshuyan pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new e91aec930fd0 HADOOP-18989. Use thread pool to improve the speed of 
creating control files in TestDFSIO (#6294). Contributed by farmmamba.
e91aec930fd0 is described below

commit e91aec930fd033e71b8bcecbb8e7873c28697f0b
Author: hfutatzhanghb 
AuthorDate: Fri Dec 8 17:15:58 2023 +0800

HADOOP-18989. Use thread pool to improve the speed of creating control 
files in TestDFSIO (#6294). Contributed by farmmamba.

Signed-off-by: Shuyan Zhang 
---
 .../test/java/org/apache/hadoop/fs/TestDFSIO.java  | 90 +++---
 1 file changed, 81 insertions(+), 9 deletions(-)

diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/fs/TestDFSIO.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/fs/TestDFSIO.java
index 40f12295be1a..6ee143dcf412 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/fs/TestDFSIO.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/fs/TestDFSIO.java
@@ -32,7 +32,16 @@ import java.text.DecimalFormat;
 import java.util.Collection;
 import java.util.Date;
 import java.util.StringTokenizer;
+import java.util.concurrent.CompletionService;
+import java.util.concurrent.ExecutionException;
+import java.util.concurrent.ExecutorCompletionService;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+import java.util.concurrent.Future;
 import java.util.concurrent.ThreadLocalRandom;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.TimeoutException;
+
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hdfs.DFSConfigKeys;
 import org.apache.hadoop.hdfs.DistributedFileSystem;
@@ -116,6 +125,10 @@ public class TestDFSIO implements Tool {
   "test.io.block.storage.policy";
   private static final String ERASURE_CODE_POLICY_NAME_KEY =
   "test.io.erasure.code.policy";
+  private ExecutorService excutorService = Executors.newFixedThreadPool(
+  2 * Runtime.getRuntime().availableProcessors());
+  private CompletionService completionService =
+  new ExecutorCompletionService<>(excutorService);
 
   static{
 Configuration.addDefaultResource("hdfs-default.xml");
@@ -289,12 +302,43 @@ public class TestDFSIO implements Tool {
 bench.analyzeResult(fs, TestType.TEST_TYPE_TRUNCATE, execTime);
   }
 
+  private class ControlFileCreateTask implements Runnable {
+private SequenceFile.Writer writer = null;
+private String name;
+private long nrBytes;
+
+ControlFileCreateTask(SequenceFile.Writer writer, String name,
+long nrBytes) {
+  this.writer = writer;
+  this.name = name;
+  this.nrBytes = nrBytes;
+}
+
+@Override
+public void run() {
+  try {
+writer.append(new Text(name), new LongWritable(nrBytes));
+  } catch (Exception e) {
+LOG.error(e.getLocalizedMessage());
+  } finally {
+if (writer != null) {
+  try {
+writer.close();
+  } catch (IOException e) {
+LOG.error(e.toString());
+  }
+}
+writer = null;
+  }
+}
+  }
+
   @SuppressWarnings("deprecation")
   private void createControlFile(FileSystem fs,
   long nrBytes, // in bytes
   int nrFiles
 ) throws IOException {
-LOG.info("creating control file: "+nrBytes+" bytes, "+nrFiles+" files");
+LOG.info("creating control file: " + nrBytes + " bytes, " + nrFiles + " 
files");
 final int maxDirItems = config.getInt(
 DFSConfigKeys.DFS_NAMENODE_MAX_DIRECTORY_ITEMS_KEY,
 DFSConfigKeys.DFS_NAMENODE_MAX_DIRECTORY_ITEMS_DEFAULT);
@@ -308,7 +352,7 @@ public class TestDFSIO implements Tool {
 
 fs.delete(controlDir, true);
 
-for(int i=0; i < nrFiles; i++) {
+for (int i = 0; i < nrFiles; i++) {
   String name = getFileName(i);
   Path controlFile = new Path(controlDir, "in_file_" + name);
   SequenceFile.Writer writer = null;
@@ -316,19 +360,42 @@ public class TestDFSIO implements Tool {
 writer = SequenceFile.createWriter(fs, config, controlFile,
Text.class, LongWritable.class,
CompressionType.NONE);
-writer.append(new Text(name), new LongWritable(nrBytes));
+Runnable controlFileCreateT
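
The diff is truncated by the archive; the mechanism is the standard
ExecutorCompletionService pattern, shown here as a self-contained sketch (toy
task bodies, not the TestDFSIO code): submit one task per control file, then
drain the completion queue to wait for all of them.

import java.util.concurrent.CompletionService;
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class CompletionServiceDemo {
  public static void main(String[] args) throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(4);
    CompletionService<String> cs = new ExecutorCompletionService<>(pool);
    for (int i = 0; i < 8; i++) {
      final int n = i;
      cs.submit(() -> "in_file_" + n);      // stands in for one control-file write
    }
    for (int i = 0; i < 8; i++) {
      System.out.println(cs.take().get());  // block until every task finishes
    }
    pool.shutdown();
  }
}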

(hadoop) branch trunk updated: HDFS-17243. Add the parameter storage type for getBlocks method (#6238). Contributed by Haiyang Hu.

2023-11-05 Thread zhangshuyan
This is an automated email from the ASF dual-hosted git repository.

zhangshuyan pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 4ef2322b6d7c HDFS-17243. Add the parameter storage type for getBlocks 
method (#6238). Contributed by Haiyang Hu.
4ef2322b6d7c is described below

commit 4ef2322b6d7c1d1ae1cd9e62042f2db0ae42fc7c
Author: huhaiyang 
AuthorDate: Mon Nov 6 11:20:25 2023 +0800

HDFS-17243. Add the parameter storage type for getBlocks method (#6238). 
Contributed by Haiyang Hu.

Reviewed-by: He Xiaoqiao 
Reviewed-by: Tao Li 
Signed-off-by: Shuyan Zhang 
---
 .../federation/router/RouterNamenodeProtocol.java  |   7 +-
 .../server/federation/router/RouterRpcServer.java  |   4 +-
 .../server/federation/router/TestRouterRpc.java|   6 +-
 .../NamenodeProtocolServerSideTranslatorPB.java|   4 +-
 .../protocolPB/NamenodeProtocolTranslatorPB.java   |  11 ++-
 .../hadoop/hdfs/server/balancer/Dispatcher.java|   2 +-
 .../hdfs/server/balancer/NameNodeConnector.java|   5 +-
 .../hdfs/server/blockmanagement/BlockManager.java  |   7 +-
 .../hadoop/hdfs/server/namenode/FSNamesystem.java  |   9 +-
 .../hdfs/server/namenode/NameNodeRpcServer.java|   4 +-
 .../hdfs/server/protocol/NamenodeProtocol.java |   4 +-
 .../src/main/proto/NamenodeProtocol.proto  |   1 +
 .../java/org/apache/hadoop/hdfs/TestGetBlocks.java | 110 ++---
 .../hadoop/hdfs/server/balancer/TestBalancer.java  |   2 +-
 .../balancer/TestBalancerWithHANameNodes.java  |   2 +-
 15 files changed, 142 insertions(+), 36 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterNamenodeProtocol.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterNamenodeProtocol.java
index 278d282fd7e6..a5a047d115cd 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterNamenodeProtocol.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterNamenodeProtocol.java
@@ -21,6 +21,7 @@ import java.io.IOException;
 import java.util.Map;
 import java.util.Map.Entry;
 
+import org.apache.hadoop.fs.StorageType;
 import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
 import org.apache.hadoop.hdfs.protocol.HdfsConstants.DatanodeReportType;
 import org.apache.hadoop.hdfs.security.token.block.ExportedBlockKeys;
@@ -53,7 +54,7 @@ public class RouterNamenodeProtocol implements 
NamenodeProtocol {
 
   @Override
   public BlocksWithLocations getBlocks(DatanodeInfo datanode, long size,
-  long minBlockSize, long hotBlockTimeInterval) throws IOException {
+  long minBlockSize, long hotBlockTimeInterval, StorageType storageType) 
throws IOException {
 rpcServer.checkOperation(OperationCategory.READ);
 
 // Get the namespace where the datanode is located
@@ -79,8 +80,8 @@ public class RouterNamenodeProtocol implements 
NamenodeProtocol {
 if (nsId != null) {
   RemoteMethod method = new RemoteMethod(
   NamenodeProtocol.class, "getBlocks", new Class[]
-  {DatanodeInfo.class, long.class, long.class, long.class},
-  datanode, size, minBlockSize, hotBlockTimeInterval);
+  {DatanodeInfo.class, long.class, long.class, long.class, 
StorageType.class},
+  datanode, size, minBlockSize, hotBlockTimeInterval, storageType);
   return rpcClient.invokeSingle(nsId, method, BlocksWithLocations.class);
 }
 return null;
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java
index cae61b7d927d..2aa2eae5305d 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java
@@ -1612,9 +1612,9 @@ public class RouterRpcServer extends AbstractService 
implements ClientProtocol,
 
   @Override // NamenodeProtocol
   public BlocksWithLocations getBlocks(DatanodeInfo datanode, long size,
-  long minBlockSize, long hotBlockTimeInterval) throws IOException {
+  long minBlockSize, long hotBlockTimeInterval, StorageType storageType) 
throws IOException {
 return nnProto.getBlocks(datanode, size, minBlockSize,
-hotBlockTimeInterval);
+hotBlockTimeInterval, storageType);
   }
 
   @Override // NamenodeProtocol
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRpc.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rb
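
A call-shape sketch of the extended API (proxy setup omitted; the numeric
arguments are illustrative, and namenodeProtocol/datanode are assumed to be
in scope):

// Ask for up to ~2 GiB of blocks on one DataNode, DISK replicas only.
BlocksWithLocations blocks = namenodeProtocol.getBlocks(
    datanode,                  // DataNode to pull candidate blocks from
    2L * 1024 * 1024 * 1024,   // size cap for this batch
    10L * 1024 * 1024,         // minBlockSize: ignore blocks under 10 MiB
    0L,                        // hotBlockTimeInterval: 0 disables the hot filter
    StorageType.DISK);         // new parameter: restrict to one storage type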

[hadoop] branch trunk updated (00f8cdcb0f20 -> c8abca300452)

2023-10-16 Thread zhangshuyan
This is an automated email from the ASF dual-hosted git repository.

zhangshuyan pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


from 00f8cdcb0f20 YARN-11571. [GPG] Add Information About YARN GPG in 
Federation.md (#6158) Contributed by Shilun Fan.
 add c8abca300452 HDFS-17210. Optimize AvailableSpaceBlockPlacementPolicy. 
(#6113). Contributed by GuoPhilipse.

No new revisions were added by this update.

Summary of changes:
 .../java/org/apache/hadoop/hdfs/DFSConfigKeys.java |  6 ++
 .../AvailableSpaceBlockPlacementPolicy.java| 33 +++--
 .../src/main/resources/hdfs-default.xml| 11 +++
 .../TestAvailableSpaceBlockPlacementPolicy.java| 81 +-
 4 files changed, 124 insertions(+), 7 deletions(-)





[hadoop] branch trunk updated (b00d605832ea -> 85af6c3a2850)

2023-10-09 Thread zhangshuyan
This is an automated email from the ASF dual-hosted git repository.

zhangshuyan pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


from b00d605832ea YARN-9048. Add znode hierarchy in Federation ZK State 
Store. (#6016)
 add 85af6c3a2850 HDFS-17217. Add lifeline RPC start up log when 
NameNode#startCommonServices (#6154). Contributed by  Haiyang Hu.

No new revisions were added by this update.

Summary of changes:
 .../java/org/apache/hadoop/hdfs/server/namenode/NameNode.java | 8 +---
 1 file changed, 5 insertions(+), 3 deletions(-)

