AvatarDataNode Error

2012-02-14 Thread bourne1900
Hi, all.
When starting an AvatarDataNode, the error below is shown:
2012-02-14 17:33:50,719 ERROR org.apache.hadoop.hdfs.server.datanode.AvatarDataNode: java.lang.IllegalArgumentException: not a proxy instance
        at java.lang.reflect.Proxy.getInvocationHandler(Proxy.java:637)
        at org.apache.hadoop.ipc.RPC.stopProxy(RPC.java:393)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.shutdown(DataNode.java:603)
        at org.apache.hadoop.hdfs.server.datanode.AvatarDataNode.shutdown(AvatarDataNode.java:576)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:218)
        at org.apache.hadoop.hdfs.server.datanode.AvatarDataNode.<init>(AvatarDataNode.java:119)
        at org.apache.hadoop.hdfs.server.datanode.AvatarDataNode.makeInstance(AvatarDataNode.java:691)
        at org.apache.hadoop.hdfs.server.datanode.AvatarDataNode.instantiateDataNode(AvatarDataNode.java:715)
        at org.apache.hadoop.hdfs.server.datanode.AvatarDataNode.createDataNode(AvatarDataNode.java:720)
        at org.apache.hadoop.hdfs.server.datanode.AvatarDataNode.main(AvatarDataNode.java:728)
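
For context, java.lang.reflect.Proxy.getInvocationHandler() throws exactly this IllegalArgumentException when the object handed to RPC.stopProxy() is not a JDK dynamic proxy, which usually means the namenode proxy was never successfully created before shutdown() ran. A minimal, Hadoop-free sketch that reproduces the message (class name made up for illustration only):

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

public class NotAProxyDemo {
    public static void main(String[] args) {
        // A real dynamic proxy: getInvocationHandler() returns its handler.
        Runnable proxied = (Runnable) Proxy.newProxyInstance(
                Runnable.class.getClassLoader(),
                new Class<?>[] { Runnable.class },
                new InvocationHandler() {
                    public Object invoke(Object proxy, Method m, Object[] args) {
                        return null;  // no-op handler, never actually invoked here
                    }
                });
        System.out.println(Proxy.getInvocationHandler(proxied));

        // A plain object is not a dynamic proxy, so this line throws
        // java.lang.IllegalArgumentException: not a proxy instance --
        // the same message RPC.stopProxy() produces when it is handed an
        // object that was never created through the RPC proxy machinery.
        Proxy.getInvocationHandler(new Object());
    }
}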
Does anybody know the reason?
Thank you,
Bourne

Re: Re: DN limit

2011-12-25 Thread bourne1900
Hi,
The block replication factor is 1.
There are 150 million blocks shown in the NN web UI.




Bourne

From: Harsh J
Sent: December 24, 2011 (Saturday) 2:09 PM
To: common-user
Subject: Re: Re: DN limit
Bourne,

You have 14 million files; does each take up a single block, or are these
files multi-blocked? What does the block count come up as in the live
nodes list of the NN web UI?

2011/12/23 bourne1900 bourne1...@yahoo.cn:
 Sorry, here is a more detailed description:
 I want to know how many files a datanode can hold, so there is only one
 datanode in my cluster.
 When the datanode holds 14 million files, the cluster can't work, and the
 datanode has used all of its memory (32 GB); the namenode's memory is OK.




 Bourne

 Sender: Adrian Liu
 Date: December 23, 2011 (Friday) 10:47 AM
 To: common-user@hadoop.apache.org
 Subject: Re: DN limit
 In my understanding, the maximum number of files stored in HDFS should be
 roughly (namenode memory) / sizeof(inode struct). This maximum number of HDFS
 files should be no smaller than the maximum number of files a datanode can hold.

 Please feel free to correct me, because I'm just beginning to learn Hadoop.

 On Dec 23, 2011, at 10:35 AM, bourne1900 wrote:

 Hi all,
 How many files can a datanode hold?
 In my test case, when a datanode holds 14 million files, the cluster can't
 work.




 Bourne

 Adrian Liu
 adri...@yahoo-inc.com



-- 
Harsh J

Re: Re: DN limit

2011-12-22 Thread bourne1900
Sorry, here is a more detailed description:
I want to know how many files a datanode can hold, so there is only one datanode
in my cluster.
When the datanode holds 14 million files, the cluster can't work, and the
datanode has used all of its memory (32 GB); the namenode's memory is OK.




Bourne

Sender: Adrian Liu
Date: December 23, 2011 (Friday) 10:47 AM
To: common-user@hadoop.apache.org
Subject: Re: DN limit
In my understanding, the maximum number of files stored in HDFS should be roughly
(namenode memory) / sizeof(inode struct). This maximum number of HDFS files should
be no smaller than the maximum number of files a datanode can hold.

Please feel free to correct me, because I'm just beginning to learn Hadoop.
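
To make that formula concrete, here is a rough back-of-envelope sketch. The ~150 bytes per namespace object figure is a common rule of thumb assumed only for illustration (it is not stated in this thread), and the class name is made up:

public class NamespaceCapacityEstimate {
    public static void main(String[] args) {
        // Assumed rule of thumb (not from this thread): roughly 150 bytes of
        // NameNode heap per namespace object (file, directory, or block).
        long bytesPerObject = 150L;
        long namenodeHeapBytes = 32L * 1024 * 1024 * 1024;   // e.g. a 32 GB heap

        long maxObjects = namenodeHeapBytes / bytesPerObject;
        System.out.printf("Roughly %,d namespace objects fit in a 32 GB heap%n",
                maxObjects);   // about 229 million

        // 14 million single-block files with replication 1 is roughly 28 million
        // namespace objects, well under that estimate -- consistent with the
        // report that the NameNode's memory was fine while the single DataNode
        // (which keeps an in-memory map of every replica it stores) ran out of
        // memory first.
    }
}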

On Dec 23, 2011, at 10:35 AM, bourne1900 wrote:

 Hi all,
 How many files can a datanode hold?
 In my test case, when a datanode holds 14 million files, the cluster can't work.
 
 
 
 
 Bourne

Adrian Liu
adri...@yahoo-inc.com

could not complete file...

2011-10-18 Thread bourne1900
Hi,

There are 20 threads that put files into HDFS ceaselessly; every file is 2 KB.
When 1 million files have been written, the client begins throwing "could not
complete file" exceptions ceaselessly.
At that point, the datanode is hung.
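
For reference, a minimal sketch of the kind of load generator described above, written against the Hadoop FileSystem API; the class name, path prefix, and error handling are illustrative assumptions, not the original test code:

import java.io.IOException;
import java.util.concurrent.atomic.AtomicLong;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SmallFileLoadTest {
    public static void main(String[] args) throws IOException {
        final FileSystem fs = FileSystem.get(new Configuration());
        final byte[] payload = new byte[2 * 1024];        // 2 KB per file
        final AtomicLong counter = new AtomicLong();

        for (int t = 0; t < 20; t++) {                    // 20 writer threads
            new Thread(new Runnable() {
                public void run() {
                    while (true) {
                        Path p = new Path("/loadtest/file-" + counter.incrementAndGet());
                        try {
                            FSDataOutputStream out = fs.create(p);
                            out.write(payload);
                            // close() is where the client waits for the namenode to
                            // report the file complete; the "could not complete file"
                            // retries happen inside this call.
                            out.close();
                        } catch (IOException e) {
                            e.printStackTrace();
                        }
                    }
                }
            }).start();
        }
    }
}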

I think maybe the heartbeat is being lost, so the namenode does not know the state
of the datanode. But I do not know why the heartbeat would be lost. Is there any
info that can be found in the log when a datanode cannot send heartbeats?

Thanks and regards!
bourne

Re: Re: could not complete file...

2011-10-18 Thread bourne1900
Thank you for your reply.

There is a PIPE ERROR in the datanode log, and nothing else.
The client only shows "Could not complete file" ceaselessly.

From namenodeIP:50070/dfshealth.jsp, I found that the datanode is hung, and
there is only one datanode in my cluster :)

BTW, I think the number of retries is unlimited. My Hadoop version is 0.20.2, and
the retry loop in DFSClient.java is:

while (!fileComplete) {
  fileComplete = namenode.complete(src, clientName);
  if (!fileComplete) {
    try {
      Thread.sleep(400);
      if (System.currentTimeMillis() - localstart > 5000) {
        LOG.info("Could not complete file " + src + " retrying...");
      }
    } catch (InterruptedException ie) {
    }
  }
}


bourne1900 

Sender: Uma Maheswara Rao G 72686
Date: October 18, 2011 (Tuesday) 6:00 PM
To: common-user
CC: common-user
Subject: Re: could not complete file...
- Original Message -
From: bourne1900 bourne1...@yahoo.cn
Date: Tuesday, October 18, 2011 3:21 pm
Subject: could not complete file...
To: common-user common-user@hadoop.apache.org

 Hi,
 
 There are 20 threads that put files into HDFS ceaselessly; every
 file is 2 KB.
 When 1 million files have been written, the client begins throwing
 "could not complete file" exceptions ceaselessly.
The "Could not complete file" log is actually an info-level log. It is logged by the
client when closing a file: the client will retry for some time (100 times, as I
remember) to ensure the write succeeds.
Did you observe any write failures here?

 At that point, the datanode is hung.
 
 I think maybe the heartbeat is being lost, so the namenode does not know the
 state of the datanode. But I do not know why the heartbeat would be lost. Is
 there any info that can be found in the log when a datanode cannot send
 heartbeats?
Can you check the NN UI to verify the number of live nodes? From that we can
decide whether the DN stopped sending heartbeats or not.
 
 Thanks and regards!
 bourne

Regards,
Uma