Hello --

As far as I can tell, "hadoop dfs -rmr" only checks the permissions of the directory to be deleted and its parent. Unlike Unix, it does not seem to check the permissions of the directories and files contained within the directory being deleted.
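For comparison, this is the Unix behaviour I have in mind: even inside a world-writable directory, a recursive delete stops at entries the caller isn't allowed to remove. (Paths and users below are made up, just to illustrate:)

    # (as an admin) a world-writable shared area:
    mkdir /data/shared
    chmod 777 /data/shared

    # (as userA) a subtree only userA can write into:
    mkdir /data/shared/userA
    chmod 755 /data/shared/userA
    touch /data/shared/userA/results.dat

    # (as userB) the recursive delete stops at userA's files, because
    # unlinking them requires write permission on /data/shared/userA:
    rm -rf /data/shared
    # rm: cannot remove '/data/shared/userA/results.dat': Permission denied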

Is this by design? It seems dangerous. For instance, we have a directory where we want to allow people to deposit common resources for a project. Its permissions need to be 777, otherwise only one person can write to it. But with 777 permissions, any fool can accidentally wipe it.
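Concretely, if the behaviour is as I describe above, any user could do something like this (paths and usernames are made up for the sake of the example):

    # (as userA) set up the shared project area and deposit a file:
    hadoop dfs -mkdir /projects/shared
    hadoop dfs -chmod 777 /projects/shared
    hadoop dfs -put results.dat /projects/shared/results.dat

    # (as userB, who owns nothing under /projects/shared) this succeeds,
    # because -rmr only checks /projects/shared and its parent:
    hadoop dfs -rmr /projects/shared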

(Of course, if we have /trash set up, accidental deletions are not as big a deal, but still ...)

Thoughts / comments? Is there a way to make -rmr check the permissions of the files within the directories it's deleting, just as Unix does? If not, is this a legitimate feature request? (I checked JIRA, but I didn't find anything on this ...)

Thanks,
Brian
