Re: Datanode error
This could also be due to network issues: the number of available sockets, or the
number of handler threads on the DataNode, may be too low.
Raj
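(Editorial aside, not part of Raj's mail: the socket and thread counts he alludes to correspond to DataNode settings that were commonly raised in hdfs-site.xml in that era; the property names below are the historical 0.20-era ones and the values are purely illustrative, not recommendations.)

```xml
<!-- hdfs-site.xml: illustrative 0.20-era DataNode concurrency knobs -->
<property>
  <!-- ceiling on concurrent block-transfer threads; note Hadoop's
       historical misspelling of "xceivers" -->
  <name>dfs.datanode.max.xcievers</name>
  <value>8192</value>
</property>
<property>
  <!-- number of server threads handling IPC requests on the DataNode -->
  <name>dfs.datanode.handler.count</name>
  <value>10</value>
</property>
```

The OS-level file-descriptor limit (`ulimit -n`) for the user running the DataNode usually has to be raised alongside these, or the extra sockets cannot actually be opened.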
From: Harsh J ha...@cloudera.com
To: common-user@hadoop.apache.org
Sent: Friday, July 20, 2012 9:06 AM
Subject: Re: Datanode error
Pablo,
Perhaps you've forgotten about it, but you asked the same question last
week and you did get some responses on it. Please see your earlier
thread at http://search-hadoop.com/m/0BOOh17ugmD
On Mon, Jul 23, 2012 at 7:27 PM, Pablo Musa pa...@psafe.com wrote:
Hey guys,
I have a cluster with 11 nodes (1 NN and 10 DNs) which is running and working.
However, my datanodes keep hitting the same errors, over and over.
I googled the problems and tried different flags (e.g.
-XX:MaxDirectMemorySize=2G)
and different configs (xceivers=8192), but could not solve them.
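(For reference, not part of Pablo's mail: the two tweaks he lists would normally live in different places in a stock 0.20-era install; a minimal sketch, with the values only illustrative.)

```shell
# hadoop-env.sh: pass the JVM flag only to the DataNode process,
# preserving any options already set
export HADOOP_DATANODE_OPTS="-XX:MaxDirectMemorySize=2G $HADOOP_DATANODE_OPTS"

# the xceiver ceiling is not a JVM flag; it goes in hdfs-site.xml
# as dfs.datanode.max.xcievers (sic) = 8192
```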
Hi Pablo,
Are you sure that Hadoop 0.20.2 is supported on Java 1.7? (AFAIK it's Java
1.6)
Thanks,
Anil
On Fri, Jul 20, 2012 at 6:07 AM, Pablo Musa pa...@psafe.com wrote:
Pablo,
These all seem to be timeouts from clients when they wish to read a
block, and drops from clients when they try to write a block. I
wouldn't think of them as critical errors. Aside from being worried that
a DN is logging these, are you noticing any usability issue in your
cluster? If not, I'd
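(Editorial aside, not part of Harsh's mail: the client read timeouts and write drops he describes were governed in that era by two socket-timeout keys in hdfs-site.xml; the key names are the historical ones and the millisecond values are illustrative only.)

```xml
<!-- hdfs-site.xml: illustrative 0.20-era socket timeouts -->
<property>
  <!-- how long a client waits on a DataNode socket read, in ms -->
  <name>dfs.socket.timeout</name>
  <value>120000</value>
</property>
<property>
  <!-- how long the DataNode waits on a socket write, in ms -->
  <name>dfs.datanode.socket.write.timeout</name>
  <value>480000</value>
</property>
```

Raising these can quiet the log noise from slow clients, at the cost of tying up DataNode threads longer on genuinely dead connections.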