This was a very odd error - it turns out that I had created a file named
"tmp" in my filesystem root, which meant that when the jobs tried to write
under the tmp directory, path resolution hit a non-directory component and
failed with the not-a-dir (ENOTDIR) exception.
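
For anyone else who hits this, a quick way to find the offending component
is to walk the failing path upward and report the first entry that exists
but is not a directory. This is only a diagnostic sketch; the staging path
below is a placeholder for whatever path appears in your own stack trace:

    import java.io.File;

    public class EnotdirProbe {
        public static void main(String[] args) {
            // Placeholder path; pass the staging dir from your stack trace.
            File f = new File(args.length > 0 ? args[0]
                    : "/tmp/hadoop-root/mapred/staging");
            // Walk from the leaf toward the root; the deepest component
            // that exists but is not a directory is the ENOTDIR culprit.
            for (File p = f; p != null; p = p.getParentFile()) {
                if (p.exists() && !p.isDirectory()) {
                    System.out.println("Not a directory: " + p);
                    return;
                }
            }
            System.out.println("No non-directory ancestor found for " + f);
        }
    }

In my case this would have printed "Not a directory: /tmp" immediately.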

In any case, I think the error reporting in the NativeIO class should be
revised so that the failing path shows up in the message - something along
the lines of the sketch below.
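
This is only a sketch of the idea, not the actual NativeIO code (the
PathAwareChmod name is hypothetical): a thin wrapper that catches the
native exception and rethrows it with the path attached.

    import java.io.IOException;

    public final class PathAwareChmod {
        private PathAwareChmod() {}

        // Wraps the native chmod so the error message names the path
        // that failed, instead of a bare "ENOTDIR: Not a directory".
        public static void chmod(String path, int mode) throws IOException {
            try {
                org.apache.hadoop.io.nativeio.NativeIO.chmod(path, mode);
            } catch (IOException e) {
                throw new IOException("chmod failed for path '" + path
                        + "': " + e.getMessage(), e);
            }
        }
    }

With something like that in place, the stack trace below would have pointed
straight at the bogus /tmp file.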

On Thu, Jul 11, 2013 at 10:24 PM, Devaraj k <devara...@huawei.com> wrote:

> Hi Jay,
>
>    Here the client is trying to create the staging directory in the local
> file system, when it should actually be created in HDFS.
>
> Could you check whether you have configured “fs.defaultFS” in the client
> configuration to point to HDFS?
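>
> As a quick check (this is only a sketch), the client should resolve its
> default file system to HDFS, not to the local file system. Note that on
> Hadoop 1.x the configuration key is "fs.default.name"; "fs.defaultFS" is
> the newer name for the same setting:
>
>     import org.apache.hadoop.conf.Configuration;
>     import org.apache.hadoop.fs.FileSystem;
>
>     public class DefaultFsCheck {
>         public static void main(String[] args) throws Exception {
>             // Loads core-site.xml etc. from the client's classpath.
>             Configuration conf = new Configuration();
>             FileSystem fs = FileSystem.get(conf);
>             // Prints file:/// when the client falls back to the local
>             // file system - the symptom seen here. It should print
>             // something like hdfs://namenode:8020 ("namenode" being a
>             // placeholder for your own namenode host).
>             System.out.println("Default file system: " + fs.getUri());
>         }
>     }
>
> If this prints file:///, point the default file system in core-site.xml
> at your namenode and the staging directory will be created in HDFS.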
>
> Thanks
> Devaraj k
>
> From: Jay Vyas [mailto:jayunit...@gmail.com]
> Sent: 12 July 2013 04:12
> To: common-u...@hadoop.apache.org
> Subject: Staging directory ENOTDIR error.
>
> Hi, I'm getting an un-googleable exception that I've never seen before.
>
> This is on a Hadoop 1.1 cluster... It appears to be permissions-related...
>
> Any thoughts as to how this could crop up?
>
> I assume it's a bug in my file system, but I'm not sure.
>
>
> 13/07/11 18:39:43 ERROR security.UserGroupInformation:
> PriviledgedActionException as:root cause:ENOTDIR: Not a directory
> ENOTDIR: Not a directory
>     at org.apache.hadoop.io.nativeio.NativeIO.chmod(Native Method)
>     at org.apache.hadoop.fs.FileUtil.execSetPermission(FileUtil.java:699)
>     at org.apache.hadoop.fs.FileUtil.setPermission(FileUtil.java:654)
>     at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:509)
>     at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:344)
>     at org.apache.hadoop.fs.FilterFileSystem.mkdirs(FilterFileSystem.java:189)
>     at org.apache.hadoop.mapreduce.JobSubmissionFiles.getStagingDir(JobSubmissionFiles.java:116)
>
> --
> Jay Vyas
> http://jayunit100.blogspot.com
>



-- 
Jay Vyas
http://jayunit100.blogspot.com
