[ 
https://issues.apache.org/jira/browse/HADOOP-8973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13592830#comment-13592830
 ] 

Chris Nauroth commented on HADOOP-8973:
---------------------------------------

That's interesting.  Yes, I believe this is a bug in the existing code for the 
other overload of {{DiskChecker#checkDir}}.

For example, suppose a directory listed in dfs.datanode.data.dir resides on the 
local file system with owner "foo" and permissions set to 700.  Now suppose we 
launch the datanode as user "bar".  {{DiskChecker#checkDir}} will just look for 
700 and not consider the running user, so it will think that the directory is 
usable.  The process would then hit an I/O error later, when it first tries to 
use that directory.
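A more robust check would probe the directory by actually performing the operations the process needs, instead of inspecting permission bits or relying on {{File#canRead}}/{{File#canWrite}}/{{File#canExecute}}.  A minimal sketch of that idea (the class and method names here are hypothetical, not the actual Hadoop implementation):

```java
import java.io.File;
import java.io.IOException;

public class DirAccessCheck {
    // Hypothetical helper: rather than trusting permission bits (which
    // ignore the running user) or File.canRead/canWrite/canExecute
    // (unreliable on Windows with NTFS ACLs), verify access by doing
    // the actual operations.
    public static boolean isUsable(File dir) {
        if (!dir.isDirectory()) {
            return false;
        }
        // Read + execute probe: listing a directory requires both
        // permissions on POSIX systems; returns null on failure.
        if (dir.list() == null) {
            return false;
        }
        // Write probe: try to create and then delete a scratch file.
        try {
            File probe = File.createTempFile("diskcheck", null, dir);
            return probe.delete();
        } catch (IOException e) {
            return false;
        }
    }
}
```

With this approach, the 700-owned-by-"foo" directory in the example above would fail the probe when the datanode runs as "bar", instead of passing the check and failing later.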

I'll file a separate jira for this.

                
> DiskChecker cannot reliably detect an inaccessible disk on Windows with NTFS 
> ACLs
> ---------------------------------------------------------------------------------
>
>                 Key: HADOOP-8973
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8973
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: util
>    Affects Versions: trunk-win
>            Reporter: Chris Nauroth
>            Assignee: Chris Nauroth
>         Attachments: HADOOP-8973-branch-trunk-win.patch
>
>
> DiskChecker.checkDir uses File.canRead, File.canWrite, and File.canExecute to 
> check if a directory is inaccessible.  These APIs are not reliable on Windows 
> with NTFS ACLs due to a known JVM bug.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira
