Re: HDFS Errors

2010-06-22 Thread Steve Lewis
No, I have been using it for about two weeks and have many dozen files, but it may be close to full.

On Jun 22, 2010 3:14 PM, "Allen Wittenauer" wrote:
On Jun 22, 2010, at 1:58 PM, Steve Lewis wrote:
> train...@hadoop1:~$ hadoop dfsadmin -safemode ge...

OK, so you are out of safemode.

>
> train
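A quick way to check whether the cluster really is close to full (a sketch, not taken from the thread; /path/to/dfs/data is a placeholder for whatever dfs.data.dir points at on each datanode):

  hadoop fs -du /user/training     # space used under the training user's home directory in HDFS
  df -h /path/to/dfs/data          # run on each datanode host; shows free space on the local disks backing HDFS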

Re: HDFS Errors

2010-06-22 Thread Allen Wittenauer
On Jun 22, 2010, at 1:58 PM, Steve Lewis wrote:
> train...@hadoop1:~$ hadoop dfsadmin -safemode get
> Safe mode is OFF

OK, so you are out of safemode.

>
> train...@hadoop1:~$ hadoop dfsadmin -refreshNodes

This just re-reads the list of nodes. hadoop dfsadmin -report might be more useful.
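For reference, a sketch of what the three dfsadmin commands mentioned here do (output details vary by Hadoop version):

  hadoop dfsadmin -safemode get    # reports whether the namenode is in safe mode
  hadoop dfsadmin -refreshNodes    # re-reads the include/exclude host lists; it does not restart or re-register anything
  hadoop dfsadmin -report          # prints a capacity summary plus a per-datanode list; check how many datanodes are live and how much space each has remaining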

Re: HDFS Errors

2010-06-22 Thread Steve Lewis
train...@hadoop1:~$ hadoop dfsadmin -safemode get
Safe mode is OFF
train...@hadoop1:~$ hadoop dfsadmin -refreshNodes
train...@hadoop1:~$ hadoop fs -copyFromLocal small_yeast /user/training/small_yeast
^CcopyFromLocal: Filesystem closed

with 1 file copied, then the same error

On Tue, Jun 22, 2010

Re: HDFS Errors

2010-06-22 Thread Allen Wittenauer
On Jun 22, 2010, at 12:55 PM, Steve Lewis wrote:
> /user/training/small_yeast/yeast_chrXIV0006.sam.gz could only be
> replicated to 0 nodes, instead of 1

... almost always means the namenode doesn't think it has any viable datanodes (anymore).

> Anyone seen this and know how to fix it
> I
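Typical checks when the namenode sees no viable datanodes (a sketch assuming a standard install; the log file pattern is a guess based on the default $HADOOP_HOME/logs layout):

  jps                                                       # on each datanode host: a DataNode process should be listed
  tail -n 100 $HADOOP_HOME/logs/hadoop-*-datanode-*.log     # look for disk-full errors or namespace/version mismatches that stop the datanode from registering
  df -h                                                     # on each datanode host: confirm the disks holding HDFS blocks are not full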

HDFS Errors

2010-06-22 Thread Steve Lewis
When I say

hadoop fs -copyFromLocal small_yeast /user/training/small_yeast

I get

org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/training/small_yeast/yeast_chrXIV0006.sam.gz could only be replicated to 0 nodes, instead of 1
at org.apache.hadoop.hdfs.server.namen
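For completeness, a sketch of the copy plus a verification step once the datanodes are healthy again (paths are the ones used in the thread):

  hadoop fs -copyFromLocal small_yeast /user/training/small_yeast   # copies the local directory recursively into HDFS
  hadoop fs -ls /user/training/small_yeast                          # verify the .sam.gz files actually landed
  hadoop fs -dus /user/training/small_yeast                         # total size of what was copied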