Go through the JobTracker, find the node that handled
attempt_201012061426_0001_m_000292_0, and check whether there are
filesystem or permission problems on that node.
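
If you want to reproduce the check yourself on that node, here is a minimal
sketch (assuming the Hadoop jars are on the classpath; the directory list is
just an example, substitute your own mapred.local.dir entries):

import java.io.File;

import org.apache.hadoop.util.DiskChecker;
import org.apache.hadoop.util.DiskChecker.DiskErrorException;

// Runs DiskChecker.checkDir, the same primitive Hadoop's local-directory
// allocation uses: each directory must exist (or be creatable) and be
// readable and writable. Run it as the same user the TaskTracker runs as,
// since the check is done with that user's permissions.
public class LocalDirCheck {
  public static void main(String[] args) {
    // Example entries; replace with the values from your mapred.local.dir.
    String[] localDirs = { "/home/hadoop/mapred/local" };
    for (String dir : localDirs) {
      try {
        DiskChecker.checkDir(new File(dir));
        System.out.println("OK: " + dir);
      } catch (DiskErrorException e) {
        System.out.println("FAILED: " + dir + " - " + e.getMessage());
      }
    }
  }
}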

Raj


________________________________
From: Adarsh Sharma <adarsh.sha...@orkash.com>
To: common-user@hadoop.apache.org
Sent: Wed, December 8, 2010 7:48:47 PM
Subject: Re: Reduce Error

Ted Yu wrote:
> Any chance mapred.local.dir is under /tmp and part of it got cleaned up ?
> 
> On Wed, Dec 8, 2010 at 4:17 AM, Adarsh Sharma <adarsh.sha...@orkash.com>wrote:
> 
>  
>> Dear all,
>> 
>> Did anyone encounter the below error while running job in Hadoop. It occurs
>> in the reduce phase of the job.
>> 
>> attempt_201012061426_0001_m_000292_0:
>> org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not find any
>> valid local directory for
>> taskTracker/jobcache/job_201012061426_0001/attempt_201012061426_0001_m_000292_0/output/file.out
>> 
>> It states that it is not able to locate a file created in Hadoop's
>> mapred.local.dir.
>> 
>> Thanks in Advance for any sort of information regarding this.
>> 
>> Best Regards
>> 
>> Adarsh Sharma
>> 
>>    
> 
>  
Hi Ted,

My mapred.local.dir is under the /home/hadoop directory. I also tried it
under the /hdd2-2 directory, where we have lots of space.

Could mapred.map.tasks affect this?

I checked with the defaults and also with 80 maps and 16 reduces, as I have
8 slaves.
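
For the 80/16 run that means, roughly, properties like these in the job
configuration (just a sketch using the standard property names; the values
are the ones from my test run):

<property>
  <name>mapred.map.tasks</name>
  <value>80</value>
</property>

<property>
  <name>mapred.reduce.tasks</name>
  <value>16</value>
</property>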


<property>
  <name>mapred.local.dir</name>
  <value>/home/hadoop/mapred/local</value>
  <description>The local directory where MapReduce stores intermediate
  data files. May be a comma-separated list of directories on different
  devices in order to spread disk i/o. Directories that do not exist are
  ignored.</description>
</property>
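
As the description says, this can be a comma-separated list. A sketch of
what it would look like spread across both disks here (the /hdd2-2 path is
only an example of how that disk is mounted):

<property>
  <name>mapred.local.dir</name>
  <value>/home/hadoop/mapred/local,/hdd2-2/mapred/local</value>
</property>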

<property>
  <name>mapred.system.dir</name>
  <value>/home/hadoop/mapred/system</value>
  <description>The shared directory where MapReduce stores control
  files.</description>
</property>

Let me know if you need any further information.


Thanks & Regards

Adarsh Sharma
