[jira] [Commented] (HADOOP-15915) Report problems w/ local S3A buffer directory meaningfully

2018-11-08 Thread Steve Loughran (JIRA)


[ https://issues.apache.org/jira/browse/HADOOP-15915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16680365#comment-16680365 ]

Steve Loughran commented on HADOOP-15915:
-

{code}
org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not find any valid local directory for s3ablock-0001-
	at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:447)
	at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.createTmpFileForWrite(LocalDirAllocator.java:461)
	at org.apache.hadoop.fs.LocalDirAllocator.createTmpFileForWrite(LocalDirAllocator.java:200)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.createTmpFileForWrite(S3AFileSystem.java:572)
	at org.apache.hadoop.fs.s3a.S3ADataBlocks$DiskBlockFactory.create(S3ADataBlocks.java:811)
	at org.apache.hadoop.fs.s3a.S3ABlockOutputStream.createBlockIfNeeded(S3ABlockOutputStream.java:190)
	at org.apache.hadoop.fs.s3a.S3ABlockOutputStream.<init>(S3ABlockOutputStream.java:168)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.create(S3AFileSystem.java:778)
{code}
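For reference, the directories probed here are the comma-separated list in {{fs.s3a.buffer.dir}}, which defaults to {{${hadoop.tmp.dir}/s3a}}. A minimal override in {{core-site.xml}} looks like the following (the path is illustrative, not a recommendation):

{code:xml}
<property>
  <!-- Comma-separated list of local directories used to buffer S3A
       block uploads; every entry must exist and be writeable. -->
  <name>fs.s3a.buffer.dir</name>
  <value>/var/hadoop/s3a-buffer</value>
</property>
{code}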

> Report problems w/ local S3A buffer directory meaningfully
> --
>
> Key: HADOOP-15915
> URL: https://issues.apache.org/jira/browse/HADOOP-15915
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.1
>Reporter: Steve Loughran
>Priority: Major
>
> When there's a problem working with the temp directory used for block output 
> and by the staging committers, the actual path (and indeed the config option) 
> aren't printed. 
> Improvement: tell the user which directory isn't writeable
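One possible shape for the requested reporting, as a standalone sketch (the class and method names below are hypothetical, not part of the Hadoop codebase; only the {{fs.s3a.buffer.dir}} key is real): fail with a message that names both the offending path and the option to change.

```java
import java.io.File;
import java.io.IOException;

// Hypothetical sketch: when a buffer directory cannot be used, name both
// the directory and the config option in the exception message.
public class BufferDirDiagnostics {

  // Real S3A option controlling the local buffer directories.
  static final String BUFFER_DIR_KEY = "fs.s3a.buffer.dir";

  // Returns the directory if usable; otherwise throws with an
  // actionable message naming the failing path and the option to fix.
  static File requireWritableDir(String dir) throws IOException {
    File f = new File(dir);
    if (!f.isDirectory() || !f.canWrite()) {
      throw new IOException("Cannot write to buffer directory "
          + f.getAbsolutePath() + "; check the value of " + BUFFER_DIR_KEY);
    }
    return f;
  }

  public static void main(String[] args) throws IOException {
    // java.io.tmpdir is normally writeable, so this prints its path.
    System.out.println(
        requireWritableDir(System.getProperty("java.io.tmpdir"))
            .getAbsolutePath());
  }
}
```

With a check like this, the operator sees which directory failed and which key to change, instead of the bare {{DiskErrorException}} above.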



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15915) Report problems w/ local S3A buffer directory meaningfully

2018-11-08 Thread Steve Loughran (JIRA)


[ https://issues.apache.org/jira/browse/HADOOP-15915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16680366#comment-16680366 ]

Steve Loughran commented on HADOOP-15915:
-

+add to troubleshooting doc







[jira] [Commented] (HADOOP-15915) Report problems w/ local S3A buffer directory meaningfully

2020-04-12 Thread Vasii Cosmin Radu (Jira)


[ https://issues.apache.org/jira/browse/HADOOP-15915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17082066#comment-17082066 ]

Vasii Cosmin Radu commented on HADOOP-15915:


I've encountered the same issue with Flink 1.10 on an EC2 deployment. Any ideas on how 
to fix this? The problem appears on the Job Managers as soon as I submit a job. 
There is enough disk space, and the Job Managers aren't actually doing anything 
that needs much disk space; they are only uploading job graph blobs.







[jira] [Commented] (HADOOP-15915) Report problems w/ local S3A buffer directory meaningfully

2022-10-20 Thread Zbigniew Kostrzewa (Jira)


[ https://issues.apache.org/jira/browse/HADOOP-15915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17621073#comment-17621073 ]

Zbigniew Kostrzewa commented on HADOOP-15915:
-

I've recently stumbled upon this with {{3.2.2}}. For me the problem was 
that I had not changed {{hadoop.tmp.dir}}, so the {{s3ablock-0001-}} files were 
created under the {{/tmp/hadoop-/s3a}} directory. At the same time, 
on CentOS 7 in my case, there is a systemd service, 
{{systemd-tmpfiles-clean.service}}, run once a day, which cleans {{/tmp}} of 
files and directories older than 10 days. However, once the Node Manager has 
cached the fact that {{/tmp/hadoop-/s3a}} exists, it does not re-check it and 
does not re-create the directory if it no longer exists. I believe the code 
responsible for this is:
{code:java}
/** This method gets called everytime before any read/write to make sure
 * that any change to localDirs is reflected immediately.
 */
private Context confChanged(Configuration conf)
    throws IOException {
  ...
  if (!newLocalDirs.equals(ctx.savedLocalDirs)) {
{code}
and when the directory is missing log aggregation fails with this 
{{DiskChecker}} error.
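Until the allocator re-creates missing directories, one workaround is to exclude the Hadoop temp tree from tmpfiles clean-up with a drop-in (the glob below assumes the default {{/tmp/hadoop-${user.name}}} layout and is illustrative):

{code}
# /etc/tmpfiles.d/hadoop-s3a.conf
# 'x' tells systemd-tmpfiles to ignore matching paths and their contents
# during clean-up, so the buffer directories survive the daily run.
x /tmp/hadoop-*
{code}

Alternatively, pointing {{hadoop.tmp.dir}} (or just {{fs.s3a.buffer.dir}}) at a directory outside {{/tmp}} avoids the clean-up entirely.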







[jira] [Commented] (HADOOP-15915) Report problems w/ local S3A buffer directory meaningfully

2020-04-09 Thread Robert Metzger (Jira)


[ https://issues.apache.org/jira/browse/HADOOP-15915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17079003#comment-17079003 ]

Robert Metzger commented on HADOOP-15915:
-

+1 for improving the debugging experience here: 
https://lists.apache.org/thread.html/ref2db6f2367fbab60c8f54c6c3ae2c59ab82ddf602fe2046dac21822%40%3Cuser.flink.apache.org%3E



