On 06/19/12 23:10, Ellis H. Wilson III wrote:
On 06/19/12 20:42, Raj Vishwanathan wrote:
You probably have a very low somaxconn parameter (the CentOS default is 128, if I remember correctly). You can check the value under /proc/sys/net/core/somaxconn.
Aha! Excellent, it does seem…
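For anyone following along, a minimal sketch of checking (and, as root, raising) the limit Raj mentions, assuming a Linux host; the 1024 figure below is purely illustrative, not a value recommended in this thread:

```shell
# Inspect the current accept-backlog cap (Linux):
somaxconn=$(cat /proc/sys/net/core/somaxconn)
echo "net.core.somaxconn = ${somaxconn}"

# To raise it (1024 is an illustrative value), run as root:
#   sysctl -w net.core.somaxconn=1024
#   echo 'net.core.somaxconn = 1024' >> /etc/sysctl.conf   # persist across reboots
```

The thread's premise is that many reducers fetching map output at once can overflow a 128-entry accept backlog, so connections get refused and show up as fetch failures.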
Take a look at slide 25:
http://www.slideshare.net/cloudera/hadoop-troubleshooting-101-kate-ting-cloudera
It describes a similar error, so hopefully this will help you.
~ Minh
On Tue, Jun 19, 2012 at 10:27 AM, Ellis H. Wilson III el...@cse.psu.edu wrote:
Hi all,
This is my first email to the list, so feel free to be candid in your complaints if I'm doing something canonically uncouth in my requests for assistance.
I'm using Hadoop 0.23 on 50 machines, each connected with gigabit ethernet and each having solely a single hard disk. I am getting the following error repeatably for the TeraSort benchmark. TeraGen runs without error, but TeraSort runs predictably until…
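For context, the benchmark pair under discussion can be invoked roughly as follows; the jar path and row count are assumptions (layouts differ across 0.23 installs), and the guard makes the sketch a no-op on machines without Hadoop:

```shell
# Hypothetical TeraGen/TeraSort invocation; jar name and paths vary by install.
if command -v hadoop >/dev/null 2>&1; then
  hadoop jar "$HADOOP_HOME"/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar \
      teragen 10000000 tera-in          # generate 10M rows (~1 GB)
  hadoop jar "$HADOOP_HOME"/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar \
      terasort tera-in tera-out         # sort them
else
  echo "hadoop not on PATH; skipping"
fi
```

TeraGen is map-only, while TeraSort exercises the full shuffle, which is why fetch failures surface only in the second job.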
On 06/19/12 14:11, Minh Duc Nguyen wrote:
Take a look at slide 25:
http://www.slideshare.net/cloudera/hadoop-troubleshooting-101-kate-ting-cloudera
It describes a similar error, so hopefully this will help you.
I appreciate your prompt response, Minh, but as you will notice at the end of my…
From: Ellis H. Wilson III el...@cse.psu.edu
To: common-user@hadoop.apache.org
Sent: Tuesday, June 19, 2012 12:32 PM
Subject: Re: Error: Too Many Fetch Failures
On 06/19/12 13:38, Vinod Kumar Vavilapalli wrote:
Replies/more questions inline.
I'm using Hadoop 0.23 on 50 machines, each connected with gigabit ethernet…
Hello,
I am new to Hadoop.
I am using Hadoop 0.20.2 on Ubuntu.
I recently installed and configured Hadoop using the tutorials available on the internet.
My Hadoop installation is running properly.
But whenever I try to run a wordcount example, the wordcount program gets stuck at the reduce part. After long…
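A reduce phase that hangs during the copy step is often a hostname-resolution or connectivity problem; purely as a hypothetical first check (not a diagnosis offered in this thread), verify that the machine's name resolves sensibly:

```shell
# Check that the local hostname and localhost both resolve;
# reducers fetch map output over HTTP, so a name that resolves
# only to an unroutable address can stall the copy phase.
hostname
getent hosts "$(hostname)" || echo "hostname does not resolve"
getent hosts localhost
```

If the hostname resolves only to 127.0.1.1 (a common Ubuntu /etc/hosts default), other nodes cannot reach the TaskTracker by that name; on a single-node setup it can still confuse Hadoop's address binding.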
Hello Praveenesh,
On Thu, Apr 14, 2011 at 3:42 PM, praveenesh kumar praveen...@gmail.com wrote:
attempt_201104142306_0001_m_00_0, Status : FAILED
Too many fetch-failures
11/04/14 23:32:50 WARN mapred.JobClient: Error reading task output: Invalid argument or cannot assign requested address
Hi,
Where can I see the logs?
I have done a single-node cluster installation and I am running Hadoop on a single machine only. Both map and reduce are running on the same machine.
Thanks,
Praveenesh
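To answer the logs question in general terms: for a tarball install of this vintage, daemon and task logs usually live under $HADOOP_HOME/logs, though HADOOP_LOG_DIR can relocate them. A hedged sketch:

```shell
# List the log directory; HADOOP_LOG_DIR (if set) overrides the
# conventional $HADOOP_HOME/logs location. Falls back to a hint
# when neither variable points anywhere useful.
ls "${HADOOP_LOG_DIR:-$HADOOP_HOME/logs}" 2>/dev/null || \
    echo "set HADOOP_HOME or HADOOP_LOG_DIR first"

# Per-task stdout/stderr/syslog for each attempt live under:
#   $HADOOP_HOME/logs/userlogs/<job-id>/<attempt-id>/
```

The per-attempt userlogs are usually the place to look when a specific task (like a stuck reducer) is misbehaving.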