Ramesh,

That explains it then.

Going from Map to Reduce requires local disk space worth at least the
amount of data you're going to be shuffling between them. If you're
running your 'cluster' on a single machine, the answer to your question
is yes.
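
As a rough rule of thumb (not an exact formula): budget local disk for
at least the job's full map output, and the reducer's on-disk merge
passes can briefly need about twice that. A quick check against the
numbers from your mails:

  du -sh /path/to/job/input   # placeholder path; ~10G in your case
  df -h /opt                  # ~12G free per your df output

~10G of intermediate data plus merge copies won't fit comfortably
in 12G.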

On Thu, Apr 26, 2012 at 1:01 AM, Nuthalapati, Ramesh
<ramesh.nuthalap...@mtvstaff.com> wrote:
> I have a lot of space available:
>
> Filesystem            Size  Used Avail Use% Mounted on
> /dev/mapper/sysvg-opt
>                       14G  1.2G   12G   9% /opt
>
> My input files are around 10G. Is there a requirement that the Hadoop
> tmp dir should be a certain % of the input size, or something like that?
>
> Thanks !
>
> -----Original Message-----
> From: Harsh J [mailto:ha...@cloudera.com]
> Sent: Wednesday, April 25, 2012 3:19 PM
> To: common-user@hadoop.apache.org
> Subject: Re: No Space left on device
>
> This is from your mapred.local.dir (which by default may reuse 
> hadoop.tmp.dir).
>
> Do you see free space available when you run the following?
> df -h /opt/hadoop
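>
> It's also worth watching the actual local dir while the job runs. The
> 1.x defaults (unless your conf overrides them) are hadoop.tmp.dir =
> /tmp/hadoop-${user.name} and mapred.local.dir =
> ${hadoop.tmp.dir}/mapred/local. For example (path taken from the trace
> below; adjust to your setup):
>
> watch -n 5 'df -h /opt; du -sh /opt/hadoop/tmp/hadoop-hadoop/mapred/local'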
>
> On Thu, Apr 26, 2012 at 12:43 AM, Nuthalapati, Ramesh 
> <ramesh.nuthalap...@mtvstaff.com> wrote:
>> Strangely, I see the tmp folder has enough space. What else could be
>> the problem? How much should my tmp space be?
>>
>>
>> Error: java.io.IOException: No space left on device
>>        at java.io.FileOutputStream.writeBytes(Native Method)
>>        at java.io.FileOutputStream.write(FileOutputStream.java:260)
>>        at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.write(RawLocalFileSystem.java:190)
>>        at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:65)
>>        at java.io.BufferedOutputStream.write(BufferedOutputStream.java:109)
>>        at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:49)
>>        at java.io.DataOutputStream.write(DataOutputStream.java:90)
>>        at org.apache.hadoop.mapred.IFileOutputStream.write(IFileOutputStream.java:84)
>>        at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:49)
>>        at java.io.DataOutputStream.write(DataOutputStream.java:90)
>>        at org.apache.hadoop.mapred.IFile$Writer.append(IFile.java:218)
>>        at org.apache.hadoop.mapred.Merger.writeFile(Merger.java:157)
>>        at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$LocalFSMerger.run(ReduceTask.java:2454)
>>
>> java.io.IOException: Task: attempt_201204240741_0003_r_000000_1 - The reduce copier failed
>>        at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:380)
>>        at org.apache.hadoop.mapred.Child.main(Child.java:170)
>> Caused by: org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not find any valid local directory for file:/opt/hadoop/tmp/hadoop-hadoop/mapred/local/taskTracker/jobcache/job_201204240741_0003/attempt_201204240741_0003_r_000000_1/output/map_122.out
>>        at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:343)
>>        at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:124)
>>        at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$LocalFSMerger.run(ReduceTask.java:2434)
>>
>>
>
>
>
> --
> Harsh J



-- 
Harsh J
