Hi,
We're having problems setting up NameNode failover by following the wiki:
    http://wiki.apache.org/hadoop/NameNodeFailover

If we point dfs.name.dir at two local directories, it works fine.
But if one of the directories is NFS-mounted, we run into these problems:

1) "hadoop dfs -ls" takes 1-2 minutes to finish, and returns error: 
    Bad connection to FS. command aborted.

2) When we stop and restart Hadoop, the NameNode fails to start because
the previous NameNode process has become a zombie.
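
For reference, here is roughly what we have in hadoop-site.xml (the paths
below are examples, not our real mount points):

    <property>
      <name>dfs.name.dir</name>
      <!-- comma-separated list of directories; the namenode writes its
           image and edit log to every directory listed, so one local
           disk plus one NFS mount keeps an off-machine copy -->
      <value>/local/hadoop/name,/mnt/nfs/hadoop/name</value>
    </property>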

We're using hadoop-0.15.3_64.  

What is the correct way to set this up? We'd really appreciate your input.

Thanks,
Nathan
