[GitHub] [hadoop] Hexiaoqiao commented on a diff in pull request #5941: HDFS-17154. EC: Fix bug in updateBlockForPipeline after failover.
Hexiaoqiao commented on code in PR #5941:
URL: https://github.com/apache/hadoop/pull/5941#discussion_r1294147667

## hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java:

```diff
@@ -5950,8 +5952,26 @@ LocatedBlock bumpBlockGenerationStamp(ExtendedBlock block,
       block.setGenerationStamp(nextGenerationStamp(
           blockManager.isLegacyBlock(block.getLocalBlock())));
-      locatedBlock = BlockManager.newLocatedBlock(
-          block, file.getLastBlock(), null, -1);
+      BlockInfo lastBlockInfo = file.getLastBlock();
+      locatedBlock = BlockManager.newLocatedBlock(block, lastBlockInfo,
+          null, -1);
+      if (lastBlockInfo.isStriped() &&
+          ((BlockInfoStriped) lastBlockInfo).getTotalBlockNum() >
+              ((LocatedStripedBlock) locatedBlock).getBlockIndices().length) {
+        // The location info in BlockUnderConstructionFeature may not be
+        // complete after a failover, so we just return all block tokens for a
+        // striped block. This will disrupt the correspondence between
+        // LocatedStripedBlock.blockIndices and LocatedStripedBlock.locs,
+        // which is not used in client side. The correspondence between
+        // LocatedStripedBlock.blockIndices and LocatedBlock.blockToken is
+        // ensured.
+        byte[] indices =
+            new byte[((BlockInfoStriped) lastBlockInfo).getTotalBlockNum()];
+        for (int i = 0; i < indices.length; ++i) {
+          indices[i] = (byte) i;
```

Review Comment:
   Got it. +1.

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] Hexiaoqiao commented on a diff in pull request #5941: HDFS-17154. EC: Fix bug in updateBlockForPipeline after failover.
Hexiaoqiao commented on code in PR #5941:
URL: https://github.com/apache/hadoop/pull/5941#discussion_r1293059482

## hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java:

```diff
@@ -5950,8 +5952,26 @@ LocatedBlock bumpBlockGenerationStamp(ExtendedBlock block,
       block.setGenerationStamp(nextGenerationStamp(
           blockManager.isLegacyBlock(block.getLocalBlock())));
-      locatedBlock = BlockManager.newLocatedBlock(
-          block, file.getLastBlock(), null, -1);
+      BlockInfo lastBlockInfo = file.getLastBlock();
+      locatedBlock = BlockManager.newLocatedBlock(block, lastBlockInfo,
+          null, -1);
+      if (lastBlockInfo.isStriped() &&
+          ((BlockInfoStriped) lastBlockInfo).getTotalBlockNum() >
+              ((LocatedStripedBlock) locatedBlock).getBlockIndices().length) {
+        // The location info in BlockUnderConstructionFeature may not be
+        // complete after a failover, so we just return all block tokens for a
+        // striped block. This will disrupt the correspondence between
+        // LocatedStripedBlock.blockIndices and LocatedStripedBlock.locs,
+        // which is not used in client side. The correspondence between
+        // LocatedStripedBlock.blockIndices and LocatedBlock.blockToken is
+        // ensured.
+        byte[] indices =
+            new byte[((BlockInfoStriped) lastBlockInfo).getTotalBlockNum()];
+        for (int i = 0; i < indices.length; ++i) {
+          indices[i] = (byte) i;
```

Review Comment:
   Are the indices continuous here? Is it possible that the sequence is [0,2,3,4,5,8] for an RS-6-3-1024k file?
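To make the reviewer's question concrete: the fallback path in the patch always fills the indices array with a dense sequence `0..totalBlockNum-1`, so a gapped sequence like `[0,2,3,4,5,8]` cannot come out of this loop. A minimal standalone sketch of that behavior (plain Java, no Hadoop classes; `6 + 3 = 9` is an assumed RS-6-3 layout, not taken from the PR):

```java
import java.util.Arrays;

// Sketch: reproduce the indices array the patched fallback builds when the
// BlockUnderConstructionFeature is incomplete after a failover.
public class StripedIndicesSketch {
    // Mirrors the loop in the diff: one index per internal block,
    // always the dense sequence 0..totalBlockNum-1.
    static byte[] allIndices(int totalBlockNum) {
        byte[] indices = new byte[totalBlockNum];
        for (int i = 0; i < indices.length; ++i) {
            indices[i] = (byte) i;
        }
        return indices;
    }

    public static void main(String[] args) {
        // Assumed RS-6-3 striped block: 6 data + 3 parity = 9 internal blocks.
        byte[] indices = allIndices(6 + 3);
        // Always continuous; a gapped result such as [0,2,3,4,5,8] is impossible here.
        System.out.println(Arrays.toString(indices));  // prints [0, 1, 2, 3, 4, 5, 6, 7, 8]
    }
}
```

This matches the code comment in the diff: the dense index list may no longer line up position-by-position with `locs`, but the `blockIndices`/`blockToken` correspondence is what the client relies on, and that is preserved.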