[ https://issues.apache.org/jira/browse/HDFS-10650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15388627#comment-15388627 ]

Chris Nauroth commented on HDFS-10650:
--------------------------------------

I'm not aware of any history behind an intentional choice to use 666 as the 
default here.  It looks incorrect to me.  The only thing somewhat related that 
I remember is HADOOP-9155, which introduced the split of file vs. directory 
default permissions, but that didn't touch the {{applyUMask}} logic.  It would 
be good to do a thorough review of all the code paths that end up routing 
through {{DFSClient#applyUMask}} to make sure this is safe.
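
For illustration only, a minimal sketch (not tied to the attached patches) of why a 666 default is a problem for directories, assuming the {{FsPermission.getFileDefault()}} / {{FsPermission.getDirDefault()}} split from HADOOP-9155: once a umask is applied to 666, the result can never carry execute bits, so a directory created that way would not be traversable.

{code:java}
import org.apache.hadoop.fs.permission.FsPermission;

public class UmaskDefaultSketch {
  public static void main(String[] args) {
    // A typical client umask of 022.
    FsPermission umask = new FsPermission((short) 022);

    // File-style default (666) with the umask applied: rw-r--r--, no execute bits.
    System.out.println(FsPermission.getFileDefault().applyUMask(umask));

    // Directory default (777) with the umask applied: rwxr-xr-x, still traversable.
    System.out.println(FsPermission.getDirDefault().applyUMask(umask));
  }
}
{code}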

> DFSClient#mkdirs and DFSClient#primitiveMkdir should use default directory 
> permission
> -------------------------------------------------------------------------------------
>
>                 Key: HDFS-10650
>                 URL: https://issues.apache.org/jira/browse/HDFS-10650
>             Project: Hadoop HDFS
>          Issue Type: Bug
>    Affects Versions: 2.6.0
>            Reporter: John Zhuge
>            Assignee: John Zhuge
>            Priority: Minor
>         Attachments: HDFS-10650.001.patch, HDFS-10650.002.patch
>
>
> These two DFSClient methods should use the default directory permission when 
> creating a directory.
> {code:java}
>   public boolean mkdirs(String src, FsPermission permission,
>       boolean createParent) throws IOException {
>     if (permission == null) {
>       permission = FsPermission.getDefault();
>     }
> {code}
> {code:java}
>   public boolean primitiveMkdir(String src, FsPermission absPermission,
>       boolean createParent) throws IOException {
>     checkOpen();
>     if (absPermission == null) {
>       absPermission =
>           FsPermission.getDefault().applyUMask(dfsClientConf.uMask);
>     }
> {code}
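> For example (a sketch only, not necessarily what the attached patches do), both 
> fall-backs above could use the directory default from HADOOP-9155 instead of 
> {{FsPermission.getDefault()}}:
> {code:java}
>   public boolean mkdirs(String src, FsPermission permission,
>       boolean createParent) throws IOException {
>     if (permission == null) {
>       // Sketch: directory default (777 before umask) rather than getDefault().
>       permission = FsPermission.getDirDefault();
>     }
> {code}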


