[ https://issues.apache.org/jira/browse/HADOOP-15915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16680365#comment-16680365 ]

Steve Loughran commented on HADOOP-15915:
-----------------------------------------

{code}
org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not find any valid local directory for s3ablock-0001-
	at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:447)
	at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.createTmpFileForWrite(LocalDirAllocator.java:461)
	at org.apache.hadoop.fs.LocalDirAllocator.createTmpFileForWrite(LocalDirAllocator.java:200)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.createTmpFileForWrite(S3AFileSystem.java:572)
	at org.apache.hadoop.fs.s3a.S3ADataBlocks$DiskBlockFactory.create(S3ADataBlocks.java:811)
	at org.apache.hadoop.fs.s3a.S3ABlockOutputStream.createBlockIfNeeded(S3ABlockOutputStream.java:190)
	at org.apache.hadoop.fs.s3a.S3ABlockOutputStream.<init>(S3ABlockOutputStream.java:168)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.create(S3AFileSystem.java:778)
{code}
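
For context: the temp file is allocated under whichever directories are configured via {{fs.s3a.buffer.dir}} (falling back to a directory under {{hadoop.tmp.dir}}), but the exception above names neither the option nor the paths that were tried. A rough sketch of the call path below; the class and method names are illustrative, not the exact S3AFileSystem source:

{code}
// Sketch of how the buffer directory key reaches LocalDirAllocator
// (illustrative, not the exact production code).
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.LocalDirAllocator;

import java.io.File;
import java.io.IOException;

public class BufferDirAllocationSketch {

  /** The config key for the S3A block/staging buffer directories. */
  static final String BUFFER_DIR = "fs.s3a.buffer.dir";

  static File createTmpFileForWrite(String pathStr, long size, Configuration conf)
      throws IOException {
    // LocalDirAllocator round-robins over the (possibly comma-separated)
    // directory list under BUFFER_DIR; if none is usable it throws
    // DiskChecker$DiskErrorException naming only the requested file prefix
    // ("s3ablock-0001-"), not the directories it rejected.
    LocalDirAllocator allocator = new LocalDirAllocator(BUFFER_DIR);
    return allocator.createTmpFileForWrite(pathStr, size, conf);
  }
}
{code}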

> Report problems w/ local S3A buffer directory meaningfully
> ----------------------------------------------------------
>
>                 Key: HADOOP-15915
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15915
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 3.1.1
>            Reporter: Steve Loughran
>            Priority: Major
>
> When there's a problem working with the temp directory used for block output 
> and the staging committers, the actual path (and indeed the config option) 
> isn't printed. 
> Improvement: tell the user which directory isn't writable
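
One possible shape for that improvement, sketched on the assumption that allocation still goes through LocalDirAllocator (the wrapper below is hypothetical, not a committed patch): catch the DiskErrorException at the S3A layer and rethrow it with the option name and its configured value, so the failing directory is visible in the message.

{code}
// Hypothetical wrapper illustrating the requested improvement: when the
// allocator fails, report fs.s3a.buffer.dir and its configured value.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.LocalDirAllocator;
import org.apache.hadoop.util.DiskChecker.DiskErrorException;

import java.io.File;
import java.io.IOException;

public class MeaningfulBufferErrors {

  static final String BUFFER_DIR = "fs.s3a.buffer.dir";

  static File createTmpFileForWrite(LocalDirAllocator allocator,
      String pathStr, long size, Configuration conf) throws IOException {
    try {
      return allocator.createTmpFileForWrite(pathStr, size, conf);
    } catch (DiskErrorException e) {
      // Surface the option and the directories it resolved to, so the user
      // can see which directory isn't writable.
      throw new DiskErrorException("Could not create a temporary file for "
          + pathStr + " under " + BUFFER_DIR + " = " + conf.get(BUFFER_DIR)
          + ": " + e.getMessage());
    }
  }
}
{code}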


