[ https://issues.apache.org/jira/browse/HDFS-6651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14057771#comment-14057771 ]
Jing Zhao commented on HDFS-6651:
---------------------------------

I think one solution is to not count the snapshot diff against the namespace quota. That way a deletion cannot hit a {{QuotaExceededException}}, and it also simplifies quota calculation when snapshots are present. What do you think [~szetszwo]?

> Deletion failure can leak inodes permanently.
> ---------------------------------------------
>
>                 Key: HDFS-6651
>                 URL: https://issues.apache.org/jira/browse/HDFS-6651
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Kihwal Lee
>            Priority: Critical
>
> As discussed in HDFS-6618, if the deletion of a tree fails in the middle, any
> collected inodes and blocks will not be removed from {{INodeMap}} and
> {{BlocksMap}}.
>
> Since the fsimage is saved by iterating over {{INodeMap}}, the leak will persist
> across namenode restarts. Although the blanked-out inodes no longer reference
> any blocks, the blocks still refer to the inode as their {{BlockCollection}}. As
> long as that reference is not null, the blocks will live on. The leaked blocks
> from blanked-out inodes do go away after a restart.
>
> Options (when the delete fails in the middle):
> - Complete the partial delete: record the partial delete in the edit log and
>   remove the collected inodes and blocks.
> - Somehow undo the partial delete.
> - Check the quota for the snapshot diff beforehand for the whole subtree.
> - Ignore the quota check during delete even if a snapshot is present.
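
To make the failure mode concrete, here is a minimal, self-contained Java sketch of the pattern described above. It is not the actual FSNamesystem/INodeMap code; all names ({{ToyNamespace}}, {{chargeSnapshotDiff}}, the toy quota of 2, etc.) are made up for illustration. A quota charge for a snapshot diff record throws partway through the subtree traversal, so inodes that were already detached from the tree are never purged from the global inode map:

{code:java}
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/**
 * Toy illustration (NOT HDFS source) of how a quota check that throws
 * partway through a delete can strand inodes in a global inode map.
 */
public class PartialDeleteLeakDemo {

    static class QuotaExceededException extends RuntimeException {
        QuotaExceededException(String msg) { super(msg); }
    }

    static class ToyNamespace {
        // Stand-in for INodeMap: every live inode id -> path.
        final Map<Long, String> inodeMap = new HashMap<>();
        // Directory tree: parent id -> ids of its children.
        final Map<Long, List<Long>> tree = new HashMap<>();
        long diffRecords = 0;      // "snapshot diff" records charged to quota
        final long nsQuota = 2;    // toy namespace quota left for diff records

        void chargeSnapshotDiff() {
            if (diffRecords + 1 > nsQuota) {
                throw new QuotaExceededException("snapshot diff exceeds quota");
            }
            diffRecords++;
        }

        /** Delete a subtree; purge from inodeMap only if collection succeeds. */
        void delete(long rootId) {
            List<Long> collected = new ArrayList<>();
            collect(rootId, collected);          // may throw partway through
            for (long id : collected) {
                inodeMap.remove(id);             // reached only on full success
            }
        }

        /** Detach inodes bottom-up, charging one diff record per inode. */
        private void collect(long id, List<Long> collected) {
            for (long child : new ArrayList<>(tree.getOrDefault(id, List.of()))) {
                collect(child, collected);
                tree.get(id).remove(Long.valueOf(child));  // detach from parent
            }
            chargeSnapshotDiff();                // the mid-delete failure point
            collected.add(id);                   // detached, but still in inodeMap
        }
    }

    public static void main(String[] args) {
        ToyNamespace ns = new ToyNamespace();
        ns.inodeMap.put(1L, "/dir");
        ns.tree.put(1L, new ArrayList<>(List.of(2L, 3L, 4L)));
        ns.inodeMap.put(2L, "/dir/a");
        ns.inodeMap.put(3L, "/dir/b");
        ns.inodeMap.put(4L, "/dir/c");

        try {
            ns.delete(1L);
        } catch (QuotaExceededException e) {
            System.out.println("delete failed: " + e.getMessage());
        }
        // /dir/a and /dir/b were detached from the tree before the failure,
        // but the purge loop never ran, so they linger in inodeMap forever.
        System.out.println("remaining children of /dir: " + ns.tree.get(1L));
        System.out.println("inodeMap still holds a and b: " + ns.inodeMap);
    }
}
{code}

In this sketch, either rolling back the detached entries on failure or dropping the {{chargeSnapshotDiff}} call from the delete path (analogous to not counting the snapshot diff against the namespace quota, as suggested above) would avoid the stranded entries.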