RE: basic usage map/reduce error

2011-08-13 Thread Brown, Berlin [GCG-PFS]
From: Brown, Berlin [GCG-PFS] Sent: Saturday, August 13, 2011 2:00 AM To: 'common-user@hadoop.apache.org' Cc: 'berlin.br...@gmail.com' Subject: RE: basic usage map/reduce error OK, that wasn't the real error; it looks like this was: When working with Cygwin, I am

RE: basic usage map/reduce error

2011-08-12 Thread Brown, Berlin [GCG-PFS]
java.io.IOException: Task process exit with nonzero status of 127. at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:258) From: Brown, Berlin [GCG-PFS] Sent: Friday, August 12, 2011 3:49 PM To: 'common-user@hadoop.apache.org' Cc: be
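
Exit status 127 from a forked task process usually means the child shell could not find the program it was asked to run, which on Windows/Cygwin often comes down to bash not being visible to the TaskTracker. A minimal sketch of the checks (assuming a stock conf/hadoop-env.sh and a Cygwin install whose bin directory is /usr/bin, i.e. C:\cygwin\bin; the paths are illustrative, not from the thread):

    # Exit 127 = "command not found" in the child shell
    which bash          # should resolve, e.g. /usr/bin/bash under Cygwin
    echo $PATH          # Cygwin's /usr/bin must appear here for the daemon too

    # If the TaskTracker is started from a service or a different shell, its PATH
    # may differ; exporting it in conf/hadoop-env.sh is one way to pin it down:
    export PATH=/usr/bin:/bin:$PATH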

basic usage map/reduce error

2011-08-12 Thread Brown, Berlin [GCG-PFS]
I am getting this error with a mostly out-of-the-box configuration of version 0.20.203.0 when I try to run the wordcount example. $ hadoop jar hadoop-examples-0.20.203.0.jar wordcount /user/hduser/gutenberg /user/hduser/gutenberg-output6 2011-08-12 15:45:38,299 WARN org.apache.hadoop.mapred.
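
Before chasing the WARN itself, a quick sanity check (a sketch, reusing the same /user/hduser paths as the command above) is that the input directory exists in HDFS and the output directory does not, since the example job refuses to write into a pre-existing output path:

    hadoop fs -ls /user/hduser/gutenberg           # input must exist and contain files
    hadoop fs -ls /user/hduser/gutenberg-output6   # should report "No such file" before the run
    hadoop jar hadoop-examples-0.20.203.0.jar wordcount \
        /user/hduser/gutenberg /user/hduser/gutenberg-output6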

Re: Too-many fetch failure Reduce Error

2011-01-11 Thread Adarsh Sharma
Any update on this error? Thanks. Adarsh Sharma wrote: Esteban Gutierrez Moguel wrote: Adarsh, Do you have the hostnames for masters and slaves in /etc/hosts? Yes, I know this issue. But do you think the error occurs while reading the output of the map? I want to know the proper reason

Re: Too-many fetch failure Reduce Error

2011-01-09 Thread Adarsh Sharma
Esteban Gutierrez Moguel wrote: Adarsh, Do you have the hostnames for masters and slaves in /etc/hosts? Yes, I know this issue. But do you think the error occurs while reading the output of the map? I want to know the proper reason for the lines below: org.apache.hadoop.util.DiskChecker$DiskErr

Re: Too-many fetch failure Reduce Error

2011-01-07 Thread Esteban Gutierrez Moguel
Adarsh, Do you have the hostnames for masters and slaves in /etc/hosts? esteban. On Fri, Jan 7, 2011 at 06:47, Adarsh Sharma wrote: > Dear all, > > I am researching the below error and have not been able to find the > reason: > > Data Size : 3.4 GB > Hadoop-0.20.0 > > had...@ws32-test-lin:~
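
The suggestion above amounts to making sure every node resolves every other node by the same name. A minimal /etc/hosts sketch, identical on master and slaves (the hostnames and addresses are placeholders, not taken from the thread):

    127.0.0.1      localhost
    192.168.0.10   master-node
    192.168.0.11   slave-node-1
    192.168.0.12   slave-node-2
    # Do not map the machine's own hostname to 127.0.0.1/127.0.1.1, or reducers
    # may be told to fetch map output from the loopback address of another node.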

Too-many fetch failure Reduce Error

2011-01-07 Thread Adarsh Sharma
Dear all, I am researching the below error and have not been able to find the reason: Data Size : 3.4 GB Hadoop-0.20.0 had...@ws32-test-lin:~/project/hadoop-0.20.2$ bin/hadoop jar hadoop-0.20.2-examples.jar wordcount /user/hadoop/page_content.txt page_content_output.txt 11/01/07 16:11:14
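
"Too many fetch failures" is raised when reducers repeatedly fail to pull map output over HTTP from the TaskTrackers that produced it, so a first check is that every node sees every other node under a consistent name. A hedged sketch to run on each machine (the node names are placeholders for whatever the cluster actually uses):

    hostname -f                             # the name this TaskTracker will advertise
    getent hosts master-node slave-node-1   # should return the same addresses on every node
    ping -c 1 slave-node-1                  # each node should reach the others by name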

Re: Reduce Error

2010-12-09 Thread Adarsh Sharma
figure out if there are FS or permission problems. Raj From: Adarsh Sharma To: common-user@hadoop.apache.org Sent: Wed, December 8, 2010 7:48:47 PM Subject: Re: Reduce Error Ted Yu wrote: Any chance mapred.local.dir is under /tmp and part of it got

Re: Reduce Error

2010-12-08 Thread Ted Yu
and figure out >> if there are FS or permission problems. >> >> Raj >> >> >> >> From: Adarsh Sharma >> To: common-user@hadoop.apache.org >> Sent: Wed, December 8, 2010 7:48:47 PM >> Subject: Re: Reduce Err

Re: Reduce Error

2010-12-08 Thread Raj V
Subject: Re: Reduce Error Ted Yu wrote: > Any chance mapred.local.dir is under /tmp and part of it got cleaned up? > > On Wed, Dec 8, 2010 at 4:17 AM, Adarsh Sharma wrote: > > >> Dear all, >> >> Did anyone encounter the below error while running a job in Hadoop? It

Re: Reduce Error

2010-12-08 Thread Adarsh Sharma
Ted Yu wrote: Any chance mapred.local.dir is under /tmp and part of it got cleaned up? On Wed, Dec 8, 2010 at 4:17 AM, Adarsh Sharma wrote: Dear all, Did anyone encounter the below error while running a job in Hadoop? It occurs in the reduce phase of the job. attempt_201012061426_0001_m_00

Re: Reduce Error

2010-12-08 Thread Ted Yu
Any chance mapred.local.dir is under /tmp and part of it got cleaned up? On Wed, Dec 8, 2010 at 4:17 AM, Adarsh Sharma wrote: > Dear all, > > Did anyone encounter the below error while running a job in Hadoop? It occurs > in the reduce phase of the job. > > attempt_201012061426_0001_m_000292_0: >
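
Ted's question points at a common cause: many distributions clean /tmp periodically, and if mapred.local.dir lives there the intermediate map output can disappear mid-job. A hedged conf/mapred-site.xml sketch that moves it to a persistent local directory (the /app/hadoop path is only an example, not from the thread):

    <!-- keep intermediate task data out of /tmp -->
    <property>
      <name>mapred.local.dir</name>
      <value>/app/hadoop/mapred/local</value>
    </property>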

Reduce Error

2010-12-08 Thread Adarsh Sharma
Dear all, Did anyone encounter the below error while running a job in Hadoop? It occurs in the reduce phase of the job. attempt_201012061426_0001_m_000292_0: org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not find any valid local directory for taskTracker/jobcache/job_2010120614
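
The DiskChecker error means every directory listed in mapred.local.dir was rejected, typically because it is missing, full, or not writable by the user running the TaskTracker. A quick hedged check (the path and the hadoop user are illustrative, matching the example property shown earlier):

    df -h /app/hadoop/mapred/local    # is there free space on the local disk?
    ls -ld /app/hadoop/mapred/local   # does the directory exist, and who owns it?
    chown -R hadoop:hadoop /app/hadoop/mapred/local   # only if ownership is wrong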