The following issue might be affecting you (from the release notes):

http://issues.apache.org/jira/browse/HADOOP-2185

    HADOOP-2185.  RPC Server uses any available port if the specified
    port is zero. Otherwise it uses the specified port. Also combines
    the configuration attributes for the servers' bind address and
    port from "x.x.x.x" and "y" to "x.x.x.x:y".
    Deprecated configuration variables:
      dfs.info.bindAddress
      dfs.info.port
      dfs.datanode.bindAddress
      dfs.datanode.port
      dfs.datanode.info.bindAddress
      dfs.datanode.info.port
      dfs.secondary.info.bindAddress
      dfs.secondary.info.port
      mapred.job.tracker.info.bindAddress
      mapred.job.tracker.info.port
      mapred.task.tracker.report.bindAddress
      tasktracker.http.bindAddress
      tasktracker.http.port
    New configuration variables (post HADOOP-2404):
      dfs.secondary.http.address
      dfs.datanode.address
      dfs.datanode.http.address
      dfs.http.address
      mapred.job.tracker.http.address
      mapred.task.tracker.report.address
      mapred.task.tracker.http.address
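
In practice this means any old bind-address/port pair in your
hadoop-site.xml has to be collapsed into a single host:port value.
A minimal sketch for the namenode web UI setting, assuming the stock
defaults of 0.0.0.0 and 50070 (substitute your own values):

    <!-- Old style (deprecated): separate bind address and port -->
    <property>
      <name>dfs.info.bindAddress</name>
      <value>0.0.0.0</value>
    </property>
    <property>
      <name>dfs.info.port</name>
      <value>50070</value>
    </property>

    <!-- New style (post HADOOP-2404): one combined host:port value -->
    <property>
      <name>dfs.http.address</name>
      <value>0.0.0.0:50070</value>
    </property>

Note also, per the release note above, that a port of 0 makes the
server grab any free port, so a stale or mistyped value here can leave
a daemon listening somewhere other than where you expect.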

-----Original Message-----
From: Dave Coyle [mailto:[EMAIL PROTECTED] 
Sent: Monday, March 10, 2008 10:01 PM
To: core-user@hadoop.apache.org
Subject: Re: zombie data nodes, not alive but not dead

On 2008-03-10 23:37:36 -0400, [EMAIL PROTECTED] wrote:
> I can leave the cluster running for hours and this slave will never
> "register" itself with the namenode. I've been messing with this
> problem for three days now and I'm out of ideas. Any suggestions?

I had a similar-sounding problem with a 0.16.0 setup: the namenode
thought the datanodes were dead, yet the datanodes would complain if
the namenode were unreachable, so there must have been *some*
connectivity.  Admittedly I haven't had the time yet to recreate what
I did to see if I had just mangled some config somewhere, but I was
eventually able to sort out my problem by... and yes, this sounds a
bit wacky... running a given datanode interactively, suspending it,
then bringing it back to the foreground.  E.g. (assuming your namenode
is already running):

    $ bin/hadoop datanode
    <ctrl-Z>
    $ fg

and the datanode then magically registered with the namenode.
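
If you want to check whether a node actually registered, dfsadmin's
report is handy; it lists every datanode the namenode currently knows
about:

    $ bin/hadoop dfsadmin -report

If the zombie node shows up in that report after the suspend/resume,
the trick worked.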

Give it a shot... I'm curious to hear if it works for you, too.

-Coyle
