On 05/31/2011 10:06 AM, Xu, Richard wrote:

1 namenode, 1 datanode. dfs.replication=3. We also tried 0, 1, and 2, with the same result.

*From:*Yaozhen Pan [mailto:itzhak....@gmail.com]
*Sent:* Tuesday, May 31, 2011 10:34 AM
*To:* hdfs-user@hadoop.apache.org
*Subject:* Re: Unable to start hadoop-0.20.2 but able to start hadoop-0.20.203 cluster

How many datanodes are in your cluster, and what is the value of "dfs.replication" in hdfs-site.xml? (If not specified, the default value is 3.)

From the error log, it seems there are not enough datanodes to replicate the files in HDFS.
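
If the goal is a single-datanode test cluster, the usual fix is to lower the replication factor in hdfs-site.xml so it is no larger than the number of live DNs. A minimal sketch (the property name is standard; the value of 1 is just what fits one datanode):

    <property>
      <name>dfs.replication</name>
      <value>1</value>
    </property>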

    On 2011-5-31 22:23, "Harsh J" <ha...@cloudera.com> wrote:
    Xu,

    Please post the output of `hadoop dfsadmin -report` and attach the
    tail of a started DN's log.
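
    For example, something along these lines (the log path assumes the
    default layout under $HADOOP_HOME/logs; adjust for your install):

        hadoop dfsadmin -report
        tail -n 50 $HADOOP_HOME/logs/hadoop-*-datanode-*.log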


    On Tue, May 31, 2011 at 7:44 PM, Xu, Richard <richard...@citi.com> wrote:
    > 2. Also, Configured Cap...

    This might easily be the cause. I'm not sure if it's a Solaris thing
    that can lead to this, though.


    > 3. in the datanode server, no error in logs, but the tasktracker
    logs have the following suspicious thing:...

    I don't see any suspicious log message in what you'd posted. Anyhow,
    the TT does not matter here.

    --
    Harsh J

Regards, Xu
When you installed on Solaris:
- Did you synchronize the NTP service on all nodes:
  echo "server yourntpserver.com" > /etc/inet/ntp.conf
  svcadm enable svc:/network/ntp:default
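  To verify it took effect (both are standard Solaris commands; the output
  should show the service online and a peer with a small offset):
  svcs ntp
  ntpq -p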

- Are you using the same Java version on both systems (Ubuntu and Solaris)?
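  For example, run this on each node and compare; the vendor and version
  strings should match across Ubuntu and Solaris:
  java -version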

- Can you test with one NN and two DNs?
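  A rough sketch, with placeholder hostnames: list both datanodes in
  conf/slaves on the NN:
    datanode1.example.com
    datanode2.example.com
  then restart HDFS (bin/stop-dfs.sh; bin/start-dfs.sh) and re-run
  `hadoop dfsadmin -report` to confirm both DNs registered.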



--
Marcos Luis Ortiz Valmaseda
 Software Engineer (Distributed Systems)
 http://uncubanitolinuxero.blogspot.com
