[ https://issues.apache.org/jira/browse/HDFS-11661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Wei-Chiu Chuang updated HDFS-11661:
-----------------------------------
    Attachment: HDFS-11661.001.patch

Here's my proof-of-concept fix to eliminate includedNodes while still preventing renamed snapshotted files from being counted twice. It uses FSDirectory.inodeMap to determine whether a deleted snapshotted inode was actually deleted or merely renamed, and whether a renamed inode remains under the same du subtree.

The v001 patch assumes the du root is a directory. I have not yet considered files that are truncated or appended, nor the case where the directory contains symlinks.

> GetContentSummary uses excessive amounts of memory
> --------------------------------------------------
>
>                 Key: HDFS-11661
>                 URL: https://issues.apache.org/jira/browse/HDFS-11661
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>    Affects Versions: 2.8.0, 3.0.0-alpha2
>            Reporter: Nathan Roberts
>            Assignee: Wei-Chiu Chuang
>            Priority: Blocker
>         Attachments: HDFS-11661.001.patch, Heap growth.png
>
> ContentSummaryComputationContext::nodeIncluded() is being used to keep track
> of all INodes visited during the current content-summary calculation. This
> can be all of the INodes in the filesystem, making for a VERY large hash
> table. This simply won't work on large filesystems.
> We noticed this after upgrading: a namenode with ~100 million filesystem
> objects was spending significantly more time in GC. Fortunately this system
> had some memory breathing room; other clusters we have will not run with this
> additional demand on memory.
> This was added as part of HDFS-10797 as a way of keeping track of INodes that
> have already been accounted for, to avoid double counting.

--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
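The rename-detection idea in the comment above can be sketched in plain Java. This is a hedged illustration, not the actual patch: the Inode and SnapshotRenameCheck classes below are simplified stand-ins I invented for HDFS's INode and FSDirectory.inodeMap, and the parent-walk ancestry test is an assumed mechanism for "remains under the same du subtree".

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical, simplified stand-in for an HDFS INode: just an id and a parent link.
class Inode {
    final long id;
    Inode parent; // null for the filesystem root
    Inode(long id, Inode parent) { this.id = id; this.parent = parent; }
}

// Sketch of the proposed check, assuming a map like FSDirectory.inodeMap
// that contains only inodes still present in the live namespace.
class SnapshotRenameCheck {
    private final Map<Long, Inode> inodeMap;

    SnapshotRenameCheck(Map<Long, Inode> inodeMap) { this.inodeMap = inodeMap; }

    /**
     * For an inode recorded as "deleted" in a snapshot diff, decide whether
     * the content summary under duRoot should count its snapshot copy:
     *  - truly deleted (absent from inodeMap): count it, only the snapshot has it;
     *  - renamed but still under duRoot: skip it, the live traversal counts it;
     *  - renamed outside duRoot: count the snapshot copy.
     */
    boolean countDeletedInode(long inodeId, Inode duRoot) {
        Inode live = inodeMap.get(inodeId);
        if (live == null) {
            return true; // genuinely deleted
        }
        // Renamed: walk up from the live location to see if duRoot is an ancestor.
        for (Inode cur = live; cur != null; cur = cur.parent) {
            if (cur == duRoot) {
                return false; // still in the same du subtree; avoid double counting
            }
        }
        return true; // moved out of the subtree; the snapshot copy is the only one under duRoot
    }
}
```

Unlike the includedNodes hash table from HDFS-10797, this check needs no per-traversal state proportional to the number of visited inodes; its cost is a map lookup plus a walk up the live path for diff entries only.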