[ https://issues.apache.org/jira/browse/HADOOP-15915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17082066#comment-17082066 ]

Vasii Cosmin Radu commented on HADOOP-15915:
--------------------------------------------

I've encountered the same issue with Flink 1.10 deployed on EC2. Any ideas on 
how to fix this? The problem appears on the Job Managers as soon as I submit a 
job. There is enough disk space, and the Job Managers aren't actually doing 
anything that should need that much disk space; all they do is upload job 
graph blobs.
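
In case it helps anyone else while the error reporting gets improved: a 
minimal workaround sketch, assuming the buffer directory (fs.s3a.buffer.dir, 
which defaults to ${hadoop.tmp.dir}/s3a, typically under /tmp) sits on a 
small volume. The path below is only an example:

    <!-- core-site.xml: point the S3A upload buffer at a volume with room -->
    <property>
      <name>fs.s3a.buffer.dir</name>
      <value>/mnt/large-disk/s3a-buffer</value>
    </property>

Alternatively, setting fs.s3a.fast.upload.buffer to "bytebuffer" (or "array") 
buffers uploads in memory instead of on disk, avoiding the directory entirely 
at the cost of heap/direct memory.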

> Report problems w/ local S3A buffer directory meaningfully
> ----------------------------------------------------------
>
>                 Key: HADOOP-15915
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15915
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 3.1.1
>            Reporter: Steve Loughran
>            Priority: Major
>
> When there's a problem working with the temp directory used for block output 
> and the staging committers, the actual path (and indeed the config option) 
> aren't printed. 
> Improvement: tell the user which directory isn't writable.
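
To illustrate what the issue is asking for, here is a minimal sketch of an 
up-front check that names both the directory and the option in the error 
(hypothetical helper; the class and method names are illustrative, not the 
actual S3A code):

    import java.io.File;
    import java.io.IOException;

    public class BufferDirCheck {
      // Real S3A option; its value is what should appear in the error.
      static final String BUFFER_DIR_KEY = "fs.s3a.buffer.dir";

      // Fail fast with the path *and* the config key, instead of a bare
      // "No space left on device" surfacing from deep inside the writer.
      static void verifyWritable(String configuredPath) throws IOException {
        File dir = new File(configuredPath);
        if (!dir.isDirectory() && !dir.mkdirs()) {
          throw new IOException("Cannot create S3A buffer directory " + dir
              + " (configured via " + BUFFER_DIR_KEY + ")");
        }
        if (!dir.canWrite()) {
          throw new IOException("S3A buffer directory " + dir
              + " (configured via " + BUFFER_DIR_KEY + ") is not writable");
        }
      }
    }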


