From: Brown, Berlin [GCG-PFS]
Sent: Saturday, August 13, 2011 2:00 AM
To: 'common-user@hadoop.apache.org'
Cc: 'berlin.br...@gmail.com'
Subject: RE: basic usage map/reduce error
OK, that wasn't the real error; it looks like this was it. This happens when working with Cygwin:
java.io.IOException: Task process exit with nonzero status of 127.
        at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:258)
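For reference, exit status 127 is the shell's "command not found" code, so on Cygwin this usually means the TaskTracker could not launch the child JVM through bash. A quick sanity check from the Cygwin shell:

    which bash      # Hadoop's scripts shell out to bash; this must resolve
    echo $PATH      # Cygwin's /bin and the JDK's bin directory should appear here
    java -version   # the JVM the child tasks would run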
From: Brown, Berlin [GCG-PFS]
Sent: Friday, August 12, 2011 3:49 PM
To: 'common-user@hadoop.apache.org'
Cc: 'berlin.br...@gmail.com'
I am getting this error with a mostly out-of-the-box configuration of version 0.20.203.0 when I try to run the wordcount example:
$ hadoop jar hadoop-examples-0.20.203.0.jar wordcount /user/hduser/gutenberg /user/hduser/gutenberg-output6
2011-08-12 15:45:38,299 WARN org.apache.hadoop.mapred.
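Before rerunning, it is worth confirming that the input path exists in HDFS and the output path does not, since the examples jar fails if the output directory is already present:

    hadoop fs -ls /user/hduser/gutenberg           # input must exist
    hadoop fs -ls /user/hduser/gutenberg-output6   # must not exist yet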
Any update on this error?
Thanks
Adarsh Sharma wrote:
Esteban Gutierrez Moguel wrote:
Adarsh,
Do you have the hostnames for the masters and slaves in /etc/hosts?
Yes, I know about this issue. But do you think the error occurs while reading the output of the map? I want to know the proper reason for the lines below:
org.apache.hadoop.util.DiskChecker$DiskErrorException
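For context, the /etc/hosts check Esteban suggests means every node must resolve the master and slave hostnames to the same addresses. A minimal layout, with hostnames and addresses purely illustrative:

    127.0.0.1     localhost
    192.168.0.10  master
    192.168.0.11  slave1
    192.168.0.12  slave2

The same entries should be present on every node, and each node's own hostname should match its entry.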
Adarsh,
Do you have the hostnames for the masters and slaves in /etc/hosts?
esteban.
On Fri, Jan 7, 2011 at 06:47, Adarsh Sharma wrote:
Dear all,
I am researching the below error and have not been able to find the reason:
Data Size : 3.4 GB
Hadoop-0.20.0
had...@ws32-test-lin:~/project/hadoop-0.20.2$ bin/hadoop jar hadoop-0.20.2-examples.jar wordcount /user/hadoop/page_content.txt page_content_output.txt
11/01/07 16:11:14
and figure out if there are FS or permission problems.
Raj
From: Adarsh Sharma
To: common-user@hadoop.apache.org
Sent: Wed, December 8, 2010 7:48:47 PM
Subject: Re: Reduce Error
Ted Yu wrote:
Any chance mapred.local.dir is under /tmp and part of it got cleaned up?
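Concretely, the checks Raj suggests might look like this on each node (the local path below is illustrative; use whatever mapred.local.dir actually points to):

    hadoop fsck /                              # overall HDFS health
    df -h                                      # free space on every local disk
    ls -ld /tmp/hadoop-hadoop/mapred/local     # does it exist, and is it writable by the hadoop user?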
Any chance mapred.local.dir is under /tmp and part of it got cleaned up?
On Wed, Dec 8, 2010 at 4:17 AM, Adarsh Sharma wrote:
Dear all,
Did anyone encounter the below error while running a job in Hadoop? It occurs in the reduce phase of the job.
attempt_201012061426_0001_m_000292_0:
org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not find
any valid local directory for
taskTracker/jobcache/job_2010120614
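That DiskChecker$DiskErrorException is thrown when the TaskTracker cannot find a writable local directory with enough free space among those listed in mapred.local.dir, which fits Ted's theory: if the property points under /tmp, a tmp cleaner can delete parts of the job cache mid-run. A sketch of the usual fix in conf/mapred-site.xml, with the path illustrative:

    <property>
      <name>mapred.local.dir</name>
      <value>/data/hadoop/mapred/local</value>
    </property>

The TaskTrackers need a restart after the change, and the new directory must be writable by the user running Hadoop.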