[ https://issues.apache.org/jira/browse/HADOOP-1875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12531869 ]

Hairong Kuang commented on HADOOP-1875:
---------------------------------------

From reading the code, it seems that LocalDirAllocator does fail over to a 
different directory when a tmp directory is not writable at the time a new 
directory is allocated. But if the allocated directory becomes unwritable 
while data is being written, dfs does not handle the error.
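
To make this concrete, here is a minimal sketch of the allocation-time 
failover described above; the class and method names are hypothetical and 
this is not the actual LocalDirAllocator code:

{code}
import java.io.File;
import java.io.IOException;

// Hypothetical sketch, not the real LocalDirAllocator: pick a writable
// directory once, at allocation time, from the configured alternatives.
public class BufferDirChooser {

    public static File chooseWritableDir(String[] configuredDirs) throws IOException {
        for (String path : configuredDirs) {
            File dir = new File(path);
            // Create the directory if it does not exist yet.
            if (!dir.exists() && !dir.mkdirs()) {
                continue; // could not create it, try the next alternative
            }
            // Fail over to the next alternative if this one is not writable.
            if (dir.isDirectory() && dir.canWrite()) {
                return dir;
            }
        }
        throw new IOException("no writable directory among the configured alternatives");
    }

    public static void main(String[] args) throws IOException {
        // Hypothetical value of dfs.client.buffer.dir, split on commas.
        String[] dirs = "/disk1/hadoop/tmp,/disk2/hadoop/tmp".split(",");
        System.out.println("chosen buffer dir: " + chooseWritableDir(dirs));
    }
}
{code}

The check happens only once, so a directory that turns read-only after it 
has been handed out, while a block is still being written, is exactly the 
case that is not covered.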

Christian, could you please post the failure stack trace so that I can 
pinpoint the error? Thanks.

> multiple dfs.client.buffer.dir directories are not treated as alternatives
> --------------------------------------------------------------------------
>
>                 Key: HADOOP-1875
>                 URL: https://issues.apache.org/jira/browse/HADOOP-1875
>             Project: Hadoop
>          Issue Type: Bug
>          Components: fs
>            Reporter: Christian Kunz
>            Assignee: Hairong Kuang
>            Priority: Blocker
>             Fix For: 0.15.0
>
>
> When specifying multiple directories in the value for dfs.client.buffer.dir, 
> jobs fail when the selected directory does not exist or is not writable. 
> Correct behaviour should be to create the directory when it does not exist 
> and fail over to an alternative directory when it is not writable.
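
For reference, a minimal sketch (with made-up paths) of what specifying 
multiple directories for dfs.client.buffer.dir looks like on the client 
side; the value is assumed to be a comma-separated list:

{code}
import org.apache.hadoop.conf.Configuration;

// Hypothetical paths: dfs.client.buffer.dir is assumed to take a
// comma-separated list of local directories that the DFS client uses to
// buffer block data before it is sent to the datanodes.
public class MultiBufferDirExample {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        conf.set("dfs.client.buffer.dir", "/disk1/hadoop/tmp,/disk2/hadoop/tmp");
        System.out.println("buffer dirs: " + conf.get("dfs.client.buffer.dir"));
    }
}
{code}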

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
