[ https://issues.apache.org/jira/browse/HDFS-16191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17412170#comment-17412170 ]
Renukaprasad C commented on HDFS-16191:
---------------------------------------

Thanks [~xinglin] for the review & feedback.

org.apache.hadoop.util.PartitionedGSet#addNewPartitionIfNeeded - here we check the SIZE of the current partition: we create and return a new partition if the size exceeds the threshold, otherwise we return the same partition.

    private PartitionEntry addNewPartitionIfNeeded(
        PartitionEntry curPart, K key) {
      if (curPart.size() < DEFAULT_PARTITION_CAPACITY * DEFAULT_PARTITION_OVERFLOW
          || curPart.contains(key)) {
        return curPart;
      }
      return addNewPartition(key);
    }

So a new partition is added whenever the size exceeds the configured threshold. Once a new partition is added and some inodes are stored in it, iteration fails to find those inodes, because we iterate only over the static partitions.

With the above patch, I verified the functionality and the related UTs, which work fine.

One issue I found here: static partitions are added with keys like range key[0, 16385], range key[1, 16385], ... range key[25, 16385], whereas dynamic partitions are added with keys like inodefile[0, <X InodeId>], inodefile[0, Y InodeId], ... When these keys are compared to look up a partition, we get the newly added partition iNodeFile[0, X inodeId] after range key[0, 16385] is full. Let me check this scenario once again; we can discuss any other issues. Meanwhile, you can also check the scenario where one partition gets full.

> [FGL] Fix FSImage loading issues on dynamic partitions
> ------------------------------------------------------
>
>                 Key: HDFS-16191
>                 URL: https://issues.apache.org/jira/browse/HDFS-16191
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>            Reporter: Renukaprasad C
>            Assignee: Renukaprasad C
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 1h
>  Remaining Estimate: 0h
>
> When new partitions get added into PartitionGSet, the iterator does not
> consider them; it always iterates over the static partition count only. This
> leads to a flood of warning messages such as:
> 2021-08-28 03:23:19,420 WARN namenode.FSImageFormatPBINode: Fail to find inode 139780 when saving the leases.
> 2021-08-28 03:23:19,420 WARN namenode.FSImageFormatPBINode: Fail to find inode 139781 when saving the leases.
> 2021-08-28 03:23:19,420 WARN namenode.FSImageFormatPBINode: Fail to find inode 139784 when saving the leases.
> 2021-08-28 03:23:19,420 WARN namenode.FSImageFormatPBINode: Fail to find inode 139785 when saving the leases.
> 2021-08-28 03:23:19,420 WARN namenode.FSImageFormatPBINode: Fail to find inode 139786 when saving the leases.
> 2021-08-28 03:23:19,420 WARN namenode.FSImageFormatPBINode: Fail to find inode 139788 when saving the leases.
> 2021-08-28 03:23:19,421 WARN namenode.FSImageFormatPBINode: Fail to find inode 139789 when saving the leases.
> 2021-08-28 03:23:19,421 WARN namenode.FSImageFormatPBINode: Fail to find inode 139790 when saving the leases.
> 2021-08-28 03:23:19,421 WARN namenode.FSImageFormatPBINode: Fail to find inode 139791 when saving the leases.
> 2021-08-28 03:23:19,421 WARN namenode.FSImageFormatPBINode: Fail to find inode 139793 when saving the leases.
> 2021-08-28 03:23:19,421 WARN namenode.FSImageFormatPBINode: Fail to find inode 139795 when saving the leases.
> 2021-08-28 03:23:19,422 WARN namenode.FSImageFormatPBINode: Fail to find inode 139796 when saving the leases.
> 2021-08-28 03:23:19,422 WARN namenode.FSImageFormatPBINode: Fail to find inode 139797 when saving the leases.
> 2021-08-28 03:23:19,422 WARN namenode.FSImageFormatPBINode: Fail to find inode 139800 when saving the leases.
> 2021-08-28 03:23:19,422 WARN namenode.FSImageFormatPBINode: Fail to find inode 139801 when saving the leases.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
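[Editor's note] The iterator bug described in the comment above can be sketched with a minimal, hypothetical model: a set of partitions where an overflow check (mirroring addNewPartitionIfNeeded's size test) appends new partitions, and an iteration loop that is bounded by the partition count captured up front. The class, field names, and capacity constant below are illustrative only, not the actual PartitionedGSet code.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the reported bug: entries stored in partitions
// added dynamically are missed when iteration only covers the static
// partition count. Not the real PartitionedGSet implementation.
public class DynamicPartitionSketch {
  // Stand-in for DEFAULT_PARTITION_CAPACITY * DEFAULT_PARTITION_OVERFLOW.
  static final int CAPACITY = 2;

  final List<List<Integer>> partitions = new ArrayList<>();

  DynamicPartitionSketch(int staticPartitions) {
    for (int i = 0; i < staticPartitions; i++) {
      partitions.add(new ArrayList<>());
    }
  }

  // Mirrors the size check in addNewPartitionIfNeeded: reuse the current
  // partition until it reaches the capacity threshold, then add a new one.
  void add(int inodeId) {
    List<Integer> cur = partitions.get(partitions.size() - 1);
    if (cur.size() >= CAPACITY) {
      cur = new ArrayList<>();
      partitions.add(cur); // dynamically added partition
    }
    cur.add(inodeId);
  }

  // Buggy iteration: bounded by the static partition count, so inodes in
  // dynamic partitions are never visited ("Fail to find inode ...").
  List<Integer> iterateStaticOnly(int staticCount) {
    List<Integer> seen = new ArrayList<>();
    for (int i = 0; i < staticCount; i++) {
      seen.addAll(partitions.get(i));
    }
    return seen;
  }

  // Fixed iteration: walks every partition, including dynamic ones.
  List<Integer> iterateAll() {
    List<Integer> seen = new ArrayList<>();
    for (List<Integer> p : partitions) {
      seen.addAll(p);
    }
    return seen;
  }

  public static void main(String[] args) {
    DynamicPartitionSketch set = new DynamicPartitionSketch(1);
    for (int id = 100; id < 105; id++) {
      set.add(id); // overflows the single static partition
    }
    // Static-only iteration sees 2 inodes; full iteration sees all 5.
    System.out.println("static-only: " + set.iterateStaticOnly(1).size());
    System.out.println("all:         " + set.iterateAll().size());
  }
}
```

Under these assumptions, the fix corresponds to iterating over the live partition list rather than the partition count snapshotted at startup.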