Re: Question about DFSClient "Could not obtain block" errors

2009-09-14 Thread Mafish Liu
Check whether your datanodes are starting up correctly.
This error occurs when the namenode still has the file's entries but the
client runs into problems fetching the actual block data from the datanodes.
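
A quick way to check, assuming the stock 0.19 command-line tools (a
sketch, using the path from your command below):

  # On the source cluster: does the namenode see all datanodes,
  # and are any of them reported dead?
  hadoop dfsadmin -report

  # Inspect the troubled path: this lists each block and the
  # datanodes holding a replica, and flags missing/corrupt blocks.
  hadoop fsck /user/root/small-page-index -files -blocks -locations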

2009/9/15 scott w :
> I am trying to read data placed on hdfs in one EC2 cluster from a different
> EC2 cluster and am getting the errors below. Both EC2 Clusters are running
> v0.19. When I run 'hadoop fs -get small-page-index small-page-index' on the
> source cluster everything works fine and the data is properly retrieved out
> of hdfs. FWIW, hadoop fs -ls works fine across clusters. Any ideas of what
> might be the problem and how to remedy it?
>
> thanks,
> Scott
>
> Here are the errors I am getting:
>
> [r...@domu-12-31-38-00-4e-32 ~]# hadoop fs -cp
> hdfs://domU-12-31-38-00-1C-B1.compute-1.internal:50001/user/root/small-page-index
> small-page-index
> 09/09/14 21:48:43 INFO hdfs.DFSClient: Could not obtain block
> blk_-4157273618194597760_1160 from any node:  java.io.IOException: No live
> nodes contain current block
> 09/09/14 21:51:46 INFO hdfs.DFSClient: Could not obtain block
> blk_-4157273618194597760_1160 from any node:  java.io.IOException: No live
> nodes contain current block
> 09/09/14 21:54:49 INFO hdfs.DFSClient: Could not obtain block
> blk_-4157273618194597760_1160 from any node:  java.io.IOException: No live
> nodes contain current block
> Exception closing file /user/root/small-page-index/aIndex/_0.cfs
> java.io.IOException: Filesystem closed
>    at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:198)
>    at org.apache.hadoop.hdfs.DFSClient.access$600(DFSClient.java:65)
>    at
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.closeInternal(DFSClient.java:3084)
>    at
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.close(DFSClient.java:3053)
>    at
> org.apache.hadoop.hdfs.DFSClient$LeaseChecker.close(DFSClient.java:942)
>    at org.apache.hadoop.hdfs.DFSClient.close(DFSClient.java:210)
>    at
> org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:243)
>    at org.apache.hadoop.fs.FileSystem$Cache.closeAll(FileSystem.java:1413)
>    at org.apache.hadoop.fs.FileSystem.closeAll(FileSystem.java:236)
>    at
> org.apache.hadoop.fs.FileSystem$ClientFinalizer.run(FileSystem.java:221)
>
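
Also worth noting for the cross-cluster case: hadoop fs -ls only talks to
the namenode, while actually reading a file requires direct connections
from the client to the datanodes (port 50010 by default in 0.19). If the
datanodes registered under EC2-internal hostnames, or the security group
blocks that port between clusters, listing will work but reads will fail
exactly like this. A rough connectivity check, assuming default ports (the
datanode hostname is a placeholder you would take from the fsck output):

  # On the source cluster, find which datanodes hold the blocks:
  hadoop fsck /user/root/small-page-index -files -blocks -locations

  # Then, from a node in the destination cluster, test whether the
  # data transfer port is reachable (replace the placeholder host):
  nc -vz -w 5 <datanode-hostname-from-fsck> 50010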



-- 
maf...@gmail.com

