[ https://issues.apache.org/jira/browse/HADOOP-15918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17464207#comment-17464207 ]
Hadoop QA commented on HADOOP-15918:
------------------------------------

| (x) *{color:red}-1 overall{color}* |

|| Vote || Subsystem || Runtime || Logfile || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue}{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 10s{color} | {color:red}{color} | {color:red} HADOOP-15918 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |

|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-15918 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12948108/HADOOP-15918.002.patch |
| Console output | https://ci-hadoop.apache.org/job/PreCommit-HADOOP-Build/253/console |
| versions | git=2.17.1 |
| Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |

This message was automatically generated.

> Namenode gets stuck when deleting large dir in trash
> ----------------------------------------------------
>
>                 Key: HADOOP-15918
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15918
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.8.2, 3.1.0
>            Reporter: Tao Jie
>            Assignee: Tao Jie
>            Priority: Major
>        Attachments: HADOOP-15918.001.patch, HADOOP-15918.002.patch, HDFS-13769.001.patch, HDFS-13769.002.patch, HDFS-13769.003.patch, HDFS-13769.004.patch
>
> Similar to the situation discussed in HDFS-13671, the Namenode gets stuck for a long time when deleting a trash dir with a large amount of data.
> We found this log in the namenode:
> {quote}
> 2018-06-08 20:00:59,042 INFO namenode.FSNamesystem (FSNamesystemLock.java:writeUnlock(252)) - FSNamesystem write lock held for 23018 ms via
> java.lang.Thread.getStackTrace(Thread.java:1552)
> org.apache.hadoop.util.StringUtils.getStackTrace(StringUtils.java:1033)
> org.apache.hadoop.hdfs.server.namenode.FSNamesystemLock.writeUnlock(FSNamesystemLock.java:254)
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.writeUnlock(FSNamesystem.java:1567)
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:2820)
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:1047)
> {quote}
> One simple solution is to avoid deleting a large amount of data in a single delete RPC call. We implement a TrashPolicy that divides the delete operation into several delete RPCs, so that no single deletion removes too many files.
> Any thoughts? [~linyiqun]

--
This message was sent by Atlassian Jira
(v8.20.1#820001)
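The batched-delete idea described above can be sketched as follows. This is an illustrative, self-contained approximation, not the actual HADOOP-15918 patch: the class name `BatchedTrashDelete`, the parameter `maxEntriesPerDelete`, and the use of `java.nio.file` on the local filesystem (in place of the HDFS `FileSystem`/`TrashPolicy` API) are all assumptions for the sake of a runnable example. The point it demonstrates is the same: instead of one `delete()` on the trash root, which would hold the namenode write lock while the whole subtree is removed, issue many small deletes with a bounded number of entries each.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class BatchedTrashDelete {

    /**
     * Deletes the tree under {@code root} in batches of at most
     * {@code maxEntriesPerDelete} paths. Each batch stands in for one small
     * delete RPC; on a real namenode the write lock would be released
     * between batches. Returns the number of batches issued.
     */
    public static int deleteInBatches(Path root, int maxEntriesPerDelete)
            throws IOException {
        // Collect all paths deepest-first (reverse lexicographic order puts
        // children before their parents), so plain delete() never hits a
        // non-empty directory.
        List<Path> all;
        try (Stream<Path> walk = Files.walk(root)) {
            all = walk.sorted(Comparator.reverseOrder())
                      .collect(Collectors.toList());
        }
        int batches = 0;
        for (int i = 0; i < all.size(); i += maxEntriesPerDelete) {
            int end = Math.min(i + maxEntriesPerDelete, all.size());
            for (Path p : all.subList(i, end)) {
                Files.delete(p);  // one bounded slice of the overall delete
            }
            batches++;
        }
        return batches;
    }

    public static void main(String[] args) throws IOException {
        // Build a small fake trash dir: 3 subdirs x 10 files.
        Path root = Files.createTempDirectory("trash");
        for (int d = 0; d < 3; d++) {
            Path dir = Files.createDirectory(root.resolve("dir" + d));
            for (int f = 0; f < 10; f++) {
                Files.createFile(dir.resolve("file" + f));
            }
        }
        // 34 paths total (root + 3 dirs + 30 files), capped at 8 per batch.
        int batches = deleteInBatches(root, 8);
        System.out.println("batches=" + batches
                + " rootExists=" + Files.exists(root));
    }
}
```

The deepest-first ordering matters: it keeps every batch valid on its own, so the subtree stays consistent even if the process stops between batches, which mirrors why releasing the namenode lock between small deletes is safe.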