Thus spake Otis Gospodnetic:
Hi James,

You can put the same hadoop-site.xml on all machines. Yes, you do want a secondary NN - a single NN is a SPOF. Browse the archives from a few days back to find an email from Paul about DRBD (disk replication) to avoid this SPOF.

Okay, thank you!  Good to know (even though the documentation seems to state
that "secondary (NN) is a misnomer, since it never takes over for the primary
NN").

Now I have something interesting going on.  Given the following configuration
file, what am I doing wrong?  When I type "start-dfs.sh" on the namenode,
as instructed in the docs, I end up with, effectively, "Address already in use;
shutting down NameNode".

I do not understand this.  It's as though it's trying to start the daemon
twice, yet netstat shows nothing listening on port 50070 after the shutdown.
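
For reference, the checking I mean is nothing fancier than roughly this
(ports taken from the config below; jps may or may not be on your path):

    # anything still listening on the ports used in the config?
    netstat -an | grep -E '50070|50010|50030|50090'
    # which Hadoop daemons, if any, are still running?
    jps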

I feel like an idiot trying to wrap my mind around this!  What the heck am
I doing wrong?


<configuration>
<!-- HOST:PORT MAPPINGS -->
<property>
 <name>dfs.secondary.http.address</name>
 <value>0.0.0.0:50090</value>
 <description>
   The secondary namenode http server address and port.
   If the port is 0 then the server will start on a free port.
 </description>
</property>

<property>
 <name>dfs.datanode.address</name>
 <value>0.0.0.0:50010</value>
 <description>
   The address where the datanode server will listen to.
   If the port is 0 then the server will start on a free port.
 </description>
</property>

<property>
 <name>dfs.datanode.http.address</name>
 <value>0.0.0.0:50075</value>
 <description>
   The datanode http server address and port.
   If the port is 0 then the server will start on a free port.
 </description>
</property>

<property>
 <name>dfs.http.address</name>
 <value>idx2-r70:50070</value>
 <description>
   The address and the base port where the dfs namenode web ui will listen on.
   If the port is 0 then the server will start on a free port.
 </description>
</property>

<property>
 <name>mapred.job.tracker</name>
 <value>idx1-r70:50030</value>
 <description>The host and port that the MapReduce job tracker runs
 at.  If "local", then jobs are run in-process as a single map
 and reduce task.
 </description>
</property>

<property>
 <name>mapred.job.tracker.http.address</name>
 <value>idx1-r70:50030</value>
 <description>
   The job tracker http server address and port the server will listen on.
   If the port is 0 then the server will start on a free port.
 </description>
</property>


<property>
 <name>fs.default.name</name>
 <value>hdfs://idx2-r70:50070/</value>
 <description>The name of the default file system.  A URI whose
 scheme and authority determine the FileSystem implementation.  The
 uri's scheme determines the config property (fs.SCHEME.impl) naming
 the FileSystem implementation class.  The uri's authority is used to
 determine the host, port, etc. for a filesystem.</description>
</property>

</configuration>
--
James Graham (Greywolf)                                                       |
650.930.1138|925.768.4053                                                     *
[EMAIL PROTECTED]                                                             |
Check out what people are saying about SearchMe! -- click below
        http://www.searchme.com/stack/109aa
