[ https://issues.apache.org/jira/browse/HADOOP-6631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12864268#action_12864268 ]
Hadoop QA commented on HADOOP-6631:
-----------------------------------

+1 overall. Here are the results of testing the latest attachment
  http://issues.apache.org/jira/secure/attachment/12443688/HADOOP-6631.v1.patch
  against trunk revision 940989.

    +1 @author. The patch does not contain any @author tags.

    +1 tests included. The patch appears to include 3 new or modified tests.

    +1 javadoc. The javadoc tool did not generate any warning messages.

    +1 javac. The applied patch does not increase the total number of javac compiler warnings.

    +1 findbugs. The patch does not introduce any new Findbugs warnings.

    +1 release audit. The applied patch does not increase the total number of release audit warnings.

    +1 core tests. The patch passed core unit tests.

    +1 contrib tests. The patch passed contrib unit tests.

Test results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-h4.grid.sp2.yahoo.net/504/testReport/
Findbugs warnings: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-h4.grid.sp2.yahoo.net/504/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Checkstyle results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-h4.grid.sp2.yahoo.net/504/artifact/trunk/build/test/checkstyle-errors.html
Console output: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-h4.grid.sp2.yahoo.net/504/console

This message is automatically generated.

> FileUtil.fullyDelete() should continue to delete other files despite failure
> at any level.
> ------------------------------------------------------------------------------------------
>
>                 Key: HADOOP-6631
>                 URL: https://issues.apache.org/jira/browse/HADOOP-6631
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: fs, util
>            Reporter: Vinod K V
>            Assignee: Ravi Gummadi
>             Fix For: 0.22.0
>
>         Attachments: hadoop-6631-y20s-1.patch, hadoop-6631-y20s-2.patch,
> HADOOP-6631.patch, HADOOP-6631.patch, HADOOP-6631.v1.patch
>
>
> Ravi commented about this on HADOOP-6536. Paraphrasing...
> Currently, FileUtil.fullyDelete(myDir) stops deleting other files/directories
> as soon as it fails to delete any file/dir (say, because it lacks permissions
> to delete that file/dir) anywhere under myDir. This is because the method
> returns whenever the recursive call "if (!fullyDelete()) { return false; }"
> fails at any level of recursion.
> Shouldn't it instead continue deleting the remaining files/dirs in the for
> loop rather than returning false there?
> I guess fullyDelete() should delete as many files as possible (similar to
> 'rm -rf').

--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
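The "delete as much as possible" behavior described above can be sketched in Java as follows. This is a minimal illustration of the idea (record failures and keep iterating, rather than returning on the first undeletable entry), not the actual HADOOP-6631 patch; the class and method names here are hypothetical.

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;

// Hypothetical sketch: an 'rm -rf'-style recursive delete that continues
// past failures and only reports the overall result at the end.
public class FullyDeleteSketch {

    public static boolean fullyDelete(File dir) {
        boolean deletionSucceeded = true;
        File[] contents = dir.listFiles();  // null if 'dir' is a plain file
        if (contents != null) {
            for (File f : contents) {
                if (f.isFile()) {
                    if (!f.delete()) {
                        deletionSucceeded = false;  // remember, but keep going
                    }
                } else {
                    if (!fullyDelete(f)) {
                        deletionSucceeded = false;  // do NOT return early here
                    }
                }
            }
        }
        // Finally, try to remove the (hopefully now empty) directory itself.
        if (!dir.delete()) {
            deletionSucceeded = false;
        }
        return deletionSucceeded;
    }

    public static void main(String[] args) throws IOException {
        // Build a small tree: root/a.txt and root/sub/b.txt, then delete it.
        File root = Files.createTempDirectory("fullyDelete").toFile();
        File sub = new File(root, "sub");
        sub.mkdir();
        new File(root, "a.txt").createNewFile();
        new File(sub, "b.txt").createNewFile();
        boolean ok = fullyDelete(root);
        System.out.println(ok && !root.exists());
    }
}
```

The key design point is that the recursion accumulates failures into a boolean instead of propagating the first false upward immediately, so one permission-denied entry no longer shields its siblings from deletion.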