[ https://issues.apache.org/jira/browse/HADOOP-15915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17621073#comment-17621073 ]

Zbigniew Kostrzewa commented on HADOOP-15915:
---------------------------------------------

I've recently stumbled upon this with {{3.2.2}}. For me the problem was that I 
did not change {{hadoop.tmp.dir}}, and so the {{s3ablock-0001-}} buffer files 
were created in the {{/tmp/hadoop-<HADOOP_USER_NAME>/s3a}} directory. At the 
same time, on CentOS 7 in my case, there is a systemd service, 
{{systemd-tmpfiles-clean.service}}, which runs once a day and cleans {{/tmp}} 
of files and directories older than 10 days. However, once the Node Manager 
caches the fact that {{/tmp/hadoop-<HADOOP_USER_NAME>/s3a}} exists, it does not 
re-check it and does not re-create the directory if it no longer exists. I 
believe the code responsible for this is:
{code:java}
    /** This method gets called everytime before any read/write to make sure
     * that any change to localDirs is reflected immediately.
     */
    private Context confChanged(Configuration conf)
        throws IOException {
      ...
      if (!newLocalDirs.equals(ctx.savedLocalDirs)) {
{code}
When the directory is missing, log aggregation fails with this 
{{DiskChecker}} error.
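
As a workaround (just a sketch, assuming the default {{fs.s3a.buffer.dir}} of 
{{${hadoop.tmp.dir}/s3a}}), pointing the buffer directory at a persistent 
location outside {{/tmp}} keeps it clear of the systemd cleanup. The path 
below is only a placeholder for any local directory writable by the Hadoop 
user:
{code:xml}
<!-- core-site.xml: hypothetical example, /data/hadoop/s3a is a placeholder -->
<property>
  <name>fs.s3a.buffer.dir</name>
  <value>/data/hadoop/s3a</value>
</property>
{code}
Alternatively, the {{/tmp/hadoop-<HADOOP_USER_NAME>}} tree could be excluded 
from {{systemd-tmpfiles}} cleaning via a drop-in under {{/etc/tmpfiles.d/}}, 
but that only works around the caching behaviour shown above.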

> Report problems w/ local S3A buffer directory meaningfully
> ----------------------------------------------------------
>
>                 Key: HADOOP-15915
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15915
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 3.1.1
>            Reporter: Steve Loughran
>            Priority: Major
>
> When there's a problem working with the temp directory used for block output 
> and the staging committers, the actual path (and indeed the config option) 
> isn't printed. 
> Improvements: tell the user which directory isn't writeable


