Four slaves and one master, all m1.xlarge instances.

Richard J. Zak

-----Original Message-----
From: jdcry...@gmail.com [mailto:jdcry...@gmail.com] On Behalf Of
Jean-Daniel Cryans
Sent: Friday, January 23, 2009 12:34
To: core-user@hadoop.apache.org
Subject: Re: HDFS losing blocks or connection error

Richard,

This happens when the datanodes are too slow and eventually all replicas for
a single block are tagged as "bad".  What kind of instances are you using?
How many of them?
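
If the datanodes are just timing out under load, the DFS socket timeouts are worth checking before concluding the blocks are gone. A minimal hadoop-site.xml sketch, assuming the 0.19 property names and purely illustrative values:

    <!-- client/datanode read timeout, in ms (default 60000) -->
    <property>
      <name>dfs.socket.timeout</name>
      <value>180000</value>
    </property>
    <!-- datanode write timeout, in ms (default 480000; 0 disables it) -->
    <property>
      <name>dfs.datanode.socket.write.timeout</name>
      <value>0</value>
    </property>

Raising these doesn't fix slow I/O; it just keeps a slow replica from being marked bad while it catches up.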

J-D

On Fri, Jan 23, 2009 at 12:13 PM, Zak, Richard [USA]
<zak_rich...@bah.com>wrote:

>  Is there a reason why this routinely happens to me when using Hadoop
> 0.19.0 on Amazon EC2?
>
> 09/01/23 11:45:52 INFO hdfs.DFSClient: Could not obtain block
> blk_-1757733438820764312_6736 from any node:  java.io.IOException: No 
> live nodes contain current block
> 09/01/23 11:45:55 INFO hdfs.DFSClient: Could not obtain block
> blk_-1757733438820764312_6736 from any node:  java.io.IOException: No 
> live nodes contain current block
> 09/01/23 11:45:58 INFO hdfs.DFSClient: Could not obtain block
> blk_-1757733438820764312_6736 from any node:  java.io.IOException: No 
> live nodes contain current block
> 09/01/23 11:46:01 WARN hdfs.DFSClient: DFS Read: java.io.IOException: 
> Could not obtain block: blk_-1757733438820764312_6736 file=/stats.txt
>
> It seems HDFS isn't as robust or reliable as the website says, and/or I
> have a configuration issue.
>
>
>  Richard J. Zak
>
