[ https://issues.apache.org/jira/browse/HADOOP-6631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Vinod K V updated HADOOP-6631:
------------------------------

    Attachment: HADOOP-6631-20100506-ydist.final.txt

Patch for the Yahoo! distribution 0.20 security branch. Not for commit here.

> FileUtil.fullyDelete() should continue to delete other files despite failure 
> at any level.
> ------------------------------------------------------------------------------------------
>
>                 Key: HADOOP-6631
>                 URL: https://issues.apache.org/jira/browse/HADOOP-6631
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: fs, util
>            Reporter: Vinod K V
>            Assignee: Ravi Gummadi
>             Fix For: 0.21.0
>
>         Attachments: HADOOP-6631-20100505.txt, 
> HADOOP-6631-20100506-ydist.final.txt, HADOOP-6631-20100506.2.txt, 
> hadoop-6631-y20s-1.patch, hadoop-6631-y20s-2.patch, HADOOP-6631.patch, 
> HADOOP-6631.patch, HADOOP-6631.v1.patch
>
>
> Ravi commented about this on HADOOP-6536. Paraphrasing...
> Currently, FileUtil.fullyDelete(myDir) stops deleting the remaining 
> files/directories as soon as it fails to delete any one file/dir (say, for 
> lack of permissions) anywhere under myDir. This is because the method 
> returns as soon as the recursive call fails at any level of recursion: 
> "if (!fullyDelete()) { return false; }".
> Shouldn't it instead continue deleting the other files/dirs in the for loop 
> rather than returning false there?
> I guess fullyDelete() should delete as many files as possible (similar to 
> 'rm -rf').
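
For illustration only, a minimal sketch of the "delete as much as possible" behaviour described above. The class name and structure are illustrative, not taken from the attached patches, and symlink handling is deliberately omitted:

{code:java}
import java.io.File;

public class FullyDeleteSketch {
  // Deletes 'dir' and everything under it, continuing past individual
  // failures (e.g. missing permissions) instead of bailing out early.
  // Returns true only if everything was removed, like 'rm -rf' semantics.
  // NOTE: illustrative sketch; does not special-case symlinks.
  public static boolean fullyDelete(File dir) {
    boolean deletionSucceeded = true;
    File[] contents = dir.listFiles(); // null if dir is not a directory
    if (contents != null) {
      for (File content : contents) {
        if (content.isFile()) {
          if (!content.delete()) {
            deletionSucceeded = false; // remember the failure, keep going
          }
        } else if (!fullyDelete(content)) {
          deletionSucceeded = false;   // do NOT return false here
        }
      }
    }
    // Attempt to remove dir itself regardless of earlier failures; it will
    // fail anyway if any child survived.
    return dir.delete() && deletionSucceeded;
  }
}
{code}

The key change is remembering a failure and continuing the loop, instead of returning false from inside it; the final return value still reports whether everything was removed.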

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
