Re: No space left on device

2012-05-28 Thread yingnan.ma
OK, I found it: the JobTracker server's disk is full.


2012-05-28 



yingnan.ma 



From: yingnan.ma 
Sent: 2012-05-28 13:01:56 
To: common-user 
Cc: 
Subject: No space left on device 
 
Hi,
I am encountering the following problem:
 Error - Job initialization failed:
org.apache.hadoop.fs.FSError: java.io.IOException: No space left on device
	at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.write(RawLocalFileSystem.java:201)
	at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:65)
	at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:123)
	at java.io.FilterOutputStream.close(FilterOutputStream.java:140)
	at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:61)
	at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:86)
	at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSOutputSummer.close(ChecksumFileSystem.java:348)
	at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:61)
	at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:86)
	at org.apache.hadoop.mapred.JobHistory$JobInfo.logSubmitted(JobHistory.java:1344)
	...
So I think that HDFS is full or something similar, but I cannot find a way to 
address the problem. If you have any suggestions, please share them. Thank you.
Best Regards


Re: No space left on device

2012-05-28 Thread Marcos Ortiz

Do you have the JT and NN on the same node?
Look here at Lars Francke's post:
http://gbif.blogspot.com/2011/01/setting-up-hadoop-cluster-part-1-manual.html
It is a very good walkthrough of how to install Hadoop; look at the 
configuration he used for the name and data directories.
If these directories are on the same disk and you don't have enough 
space for them, you can hit that exception.


My recommendation is to put these directories on separate disks, with a 
layout very similar to Lars's configuration.
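For illustration only (the property names match the pre-YARN Hadoop of this thread; the /disk paths are placeholder assumptions, not taken from Lars's post), separating the name and data directories in hdfs-site.xml might look like:

```xml
<!-- hdfs-site.xml sketch: keep NameNode metadata and DataNode blocks on
     separate disks. The /disk1../disk3 paths are placeholders. -->
<property>
  <name>dfs.name.dir</name>
  <value>/disk1/hadoop/name</value>
</property>
<property>
  <name>dfs.data.dir</name>
  <value>/disk2/hadoop/data,/disk3/hadoop/data</value>
</property>
```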

Another recommendation is to check Hadoop's logs. Read about them here:
http://www.cloudera.com/blog/2010/11/hadoop-log-location-and-retention/

Regards

On 05/28/2012 02:20 AM, yingnan.ma wrote:

 OK, I found it: the JobTracker server's disk is full.


--
Marcos Luis Ortíz Valmaseda
 Data Engineer & Sr. System Administrator at UCI
 http://marcosluis2186.posterous.com
 http://www.linkedin.com/in/marcosluis2186
 Twitter: @marcosluis2186




Re: No Space left on device

2012-04-26 Thread Sidney Simmons
Might be worth checking your inode usage, as running out of inodes produces 
the same error.
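For example (a sketch; /tmp stands in for whichever mount your Hadoop directories live on), both conditions can be checked with df:

```shell
# ENOSPC ("No space left on device") can mean "out of blocks" OR
# "out of inodes"; df reports both views of the same filesystem.
df -h /tmp    # byte/block usage
df -i /tmp    # inode usage -- watch the IUse% column
```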

HTH




On 26 Apr 2012, at 07:01, Harsh J ha...@cloudera.com wrote:

 The transient Map-Reduce files do not go to the DFS, but rather onto
 the local filesystem directories specified by the mapred.local.dir
 parameter. If you expand this configuration to be similar to
 dfs.data.dir (as your DataNode may be carrying), then it will get
 more space/disks to do its work.
 
 See this very recent conversation for more information:
 http://search-hadoop.com/m/DWbsZ1m0Ttx
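As a sketch of what "expanding this configuration" could look like (the /data/N paths are placeholder assumptions; match them to the disks your dfs.data.dir uses), mapred-site.xml might carry:

```xml
<!-- mapred-site.xml sketch: spread MapReduce scratch space across
     several disks so shuffle/merge output is not confined to one. -->
<property>
  <name>mapred.local.dir</name>
  <value>/data/1/mapred/local,/data/2/mapred/local,/data/3/mapred/local</value>
</property>
```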
 
 On Thu, Apr 26, 2012 at 1:24 AM, Nuthalapati, Ramesh
 ramesh.nuthalap...@mtvstaff.com wrote:
 Harsh -
 
 Even if that's the case, my free tmp space is more than the DFS used space, isn't it?
 
 Configured Capacity: 116258406400 (108.27 GB)
 Present Capacity: 110155911168 (102.59 GB)
 DFS Remaining: 101976682496 (94.97 GB)
 DFS Used: 8179228672 (7.62 GB)
 DFS Used%: 7.43%
 Under replicated blocks: 0
 Blocks with corrupt replicas: 0
 Missing blocks: 0
 
 -
 Datanodes available: 1 (1 total, 0 dead)
 
 Name: 172.17.7.83:50010
 Decommission Status : Normal
 Configured Capacity: 116258406400 (108.27 GB)
 DFS Used: 8179228672 (7.62 GB)
 Non DFS Used: 6102495232 (5.68 GB)
 DFS Remaining: 101976682496(94.97 GB)
 DFS Used%: 7.04%
 DFS Remaining%: 87.72%
 Last contact: Wed Apr 25 12:52:19 PDT 2012
 
 Thanks !
 
 
 -- 
 Harsh J


Re: No Space left on device

2012-04-26 Thread JunYong Li
Maybe a file hole (sparse file) exists. Do df -h and du -sch /tmp report the same usage?
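A sketch of that comparison (/tmp is just an example mount):

```shell
# Compare the kernel's view of usage (df) with a per-file sum (du).
# A large gap can mean space held by deleted-but-still-open files,
# which du cannot see; sparse files skew the numbers the other way.
df -h /tmp
du -sch /tmp 2>/dev/null
# Deleted files still held open (and still consuming space), if any:
lsof +L1 2>/dev/null | head -n 5
```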


Re: No Space left on device

2012-04-26 Thread Chris Curtin
Look at how many job_ directories there are on your slave nodes. We're
using Cloudera, so they are under the 'userlogs' directory; not sure where
they are on 'pure' Apache.

As we approach 30k directories we see this (we run a monthly report that
does tens of thousands of jobs in a few days). We've tried tuning the number
of jobs stored in the history on the JobTracker, but it doesn't always help.
So we have an hourly cron job that finds any files older than 4 hours in
that directory and removes them. None of our individual jobs runs for more
than 30 minutes, so waiting 4 hours and blowing them away hasn't caused us
any problems.
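A minimal sketch of such a cleanup job (LOGDIR is an assumed path; point it at wherever your distribution keeps the per-job directories):

```shell
# Remove job_* directories not modified in the last 4 hours (240 minutes).
# LOGDIR is a placeholder -- adjust for your distribution's log layout.
LOGDIR=${LOGDIR:-/var/log/hadoop/userlogs}
if [ -d "$LOGDIR" ]; then
  find "$LOGDIR" -maxdepth 1 -type d -name 'job_*' -mmin +240 -exec rm -rf {} +
fi
```

Run hourly from cron, this keeps only directories young enough to still belong to running or recently finished jobs.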



On Thu, Apr 26, 2012 at 5:17 AM, JunYong Li lij...@gmail.com wrote:

 Maybe a file hole (sparse file) exists. Do df -h and du -sch /tmp report the same usage?



No Space left on device

2012-04-25 Thread Nuthalapati, Ramesh
Strangely, I see the tmp folder has enough space. What else could be the 
problem? How much should my tmp space be?


Error: java.io.IOException: No space left on device
	at java.io.FileOutputStream.writeBytes(Native Method)
	at java.io.FileOutputStream.write(FileOutputStream.java:260)
	at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.write(RawLocalFileSystem.java:190)
	at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:65)
	at java.io.BufferedOutputStream.write(BufferedOutputStream.java:109)
	at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:49)
	at java.io.DataOutputStream.write(DataOutputStream.java:90)
	at org.apache.hadoop.mapred.IFileOutputStream.write(IFileOutputStream.java:84)
	at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:49)
	at java.io.DataOutputStream.write(DataOutputStream.java:90)
	at org.apache.hadoop.mapred.IFile$Writer.append(IFile.java:218)
	at org.apache.hadoop.mapred.Merger.writeFile(Merger.java:157)
	at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$LocalFSMerger.run(ReduceTask.java:2454)

java.io.IOException: Task: attempt_201204240741_0003_r_00_1 - The reduce copier failed
	at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:380)
	at org.apache.hadoop.mapred.Child.main(Child.java:170)
Caused by: org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not find any valid local directory for file:/opt/hadoop/tmp/hadoop-hadoop/mapred/local/taskTracker/jobcache/job_201204240741_0003/attempt_201204240741_0003_r_00_1/output/map_122.out
	at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:343)
	at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:124)
	at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$LocalFSMerger.run(ReduceTask.java:2434)




Re: No Space left on device

2012-04-25 Thread Alexander Lorenz
Looks like the Hadoop partition is full.

sent via my mobile device

On Apr 25, 2012, at 9:13 PM, Nuthalapati, Ramesh 
ramesh.nuthalap...@mtvstaff.com wrote:

 Strangely, I see the tmp folder has enough space. What else could be the 
 problem? How much should my tmp space be?
 


Re: No Space left on device

2012-04-25 Thread Harsh J
This is from your mapred.local.dir (which by default may reuse hadoop.tmp.dir).

Do you see free space available when you do the following?:
df -h /opt/hadoop

On Thu, Apr 26, 2012 at 12:43 AM, Nuthalapati, Ramesh
ramesh.nuthalap...@mtvstaff.com wrote:
 Strangely, I see the tmp folder has enough space. What else could be the 
 problem? How much should my tmp space be?







-- 
Harsh J


RE: No Space left on device

2012-04-25 Thread Nuthalapati, Ramesh
I have a lot of space available:

Filesystem             Size  Used Avail Use% Mounted on
/dev/mapper/sysvg-opt   14G  1.2G   12G    9% /opt

My input files are around 10G. Is there a requirement that the Hadoop tmp dir 
be a certain percentage of the input size, or something similar?

Thanks !

-Original Message-
From: Harsh J [mailto:ha...@cloudera.com] 
Sent: Wednesday, April 25, 2012 3:19 PM
To: common-user@hadoop.apache.org
Subject: Re: No Space left on device

This is from your mapred.local.dir (which by default may reuse hadoop.tmp.dir).

Do you see free space available when you do the following?:
df -h /opt/hadoop


--
Harsh J


Re: No Space left on device

2012-04-25 Thread Harsh J
Ramesh,

That explains it then.

Going from Map to Reduce requires disk storage worth at least the
amount of data you're gonna be sending between them. If you're running
your 'cluster' on a single machine, the answer to your question is
yes.

On Thu, Apr 26, 2012 at 1:01 AM, Nuthalapati, Ramesh
ramesh.nuthalap...@mtvstaff.com wrote:
 I have a lot of space available:

 Filesystem             Size  Used Avail Use% Mounted on
 /dev/mapper/sysvg-opt   14G  1.2G   12G    9% /opt

 My input files are around 10G. Is there a requirement that the Hadoop tmp dir 
 be a certain percentage of the input size, or something similar?

 Thanks !




-- 
Harsh J


Re: RE: No Space Left On Device though space is available

2009-08-03 Thread Mathias Herberts
no quota on the fs?

On Aug 3, 2009 7:13 AM, Palleti, Pallavi pallavi.pall...@corp.aol.com
wrote:

No. These are production jobs which were working fine, and suddenly we
started seeing these issues. If you look at the error log, the jobs are
failing at submission time itself, while copying the application jar. And
when I check the client machine's disk and also HDFS, it is only 60% full.

Thanks
Pallavi



No Space Left On Device though space is available

2009-08-02 Thread Pallavi Palleti
Hi all,

We have a 60-node cluster running hadoop-0.18.2. We are seeing "No Space 
Left On Device" errors; the detailed error is:
org.apache.hadoop.ipc.RemoteException: java.io.IOException: java.lang.RuntimeException: javax.xml.transform.TransformerException: java.io.IOException: No space left on device
	at org.apache.hadoop.conf.Configuration.write(Configuration.java:996)
	at org.apache.hadoop.mapred.JobHistory$JobInfo.logSubmitted(JobHistory.java:530)
	at org.apache.hadoop.mapred.JobInProgress.init(JobInProgress.java:196)
	at org.apache.hadoop.mapred.JobTracker.submitJob(JobTracker.java:1783)
	at sun.reflect.GeneratedMethodAccessor21.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:452)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:888)

	at org.apache.hadoop.ipc.Client.call(Client.java:715)
	at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:216)
	at org.apache.hadoop.mapred.$Proxy1.submitJob(Unknown Source)
	at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:788)
	at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1026)

Surprisingly, there is no space issue, yet it still gives the above error. Can 
someone kindly let me know what the issue could be?

Thanks
Pallavi


Re: No Space Left On Device though space is available

2009-08-02 Thread prashant ullegaddi
Are you using any space on local nodes? When we indexed 1TB on 8 nodes, we
were creating the index on the local file system and then copying it to DFS.
It so happened that there wasn't any space left. After that we started moving
the index instead of copying it, and everything worked fine. Probably that
could be the problem with your application as well.

Thanks,
Prashant.

On Mon, Aug 3, 2009 at 10:05 AM, Pallavi Palleti 
pallavi.pall...@corp.aol.com wrote:

 Hi all,

 We are having a 60 node cluster running hadoop-0.18.2. We are seeing No
 Space Left On Device and the detailed error is
  org.apache.hadoop.ipc.RemoteException: java.io.IOException: java.lang.RuntimeException: javax.xml.transform.TransformerException: java.io.IOException: No space left on device

 Surprisingly, there is no space issue. Still, it is giving above error. Can
 someone kindly let me know what could be the issue?

 Thanks
 Pallavi



RE: No Space Left On Device though space is available

2009-08-02 Thread Palleti, Pallavi
No. These are production jobs which were working fine, and suddenly we
started seeing these issues. If you look at the error log, the jobs are
failing at submission time itself, while copying the application jar. And
when I check the client machine's disk and also HDFS, it is only 60% full.

Thanks
Pallavi

-Original Message-
From: prashant ullegaddi [mailto:prashullega...@gmail.com] 
Sent: Monday, August 03, 2009 10:10 AM
To: common-user@hadoop.apache.org
Subject: Re: No Space Left On Device though space is available

Are you using any space on local nodes? When we indexed 1TB on 8 nodes,
we
were creating index on local file system and
then copying the same to DFS. It so happened that there wasn't any space
left. After that we started moving the index instead of copying it.
Everything worked fine.  Probably that could be a problem with your
application as well.

Thanks,
Prashant.
