Hi James,

You can put the same hadoop-site.xml on all machines.  Yes, you do want a 
secondary NN - a single NN is a SPOF.  Browse the archives a few days back to 
find an email from Paul about DRBD (disk replication) to avoid this SPOF.
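
For what it's worth, a single shared hadoop-site.xml could look roughly like the
sketch below.  This is only a minimal illustration - the hostnames
(idx2-namenode, idx1-tracker) are taken from your sample layout and the ports
(9000/9001) are just example values, so adjust to taste:

  <?xml version="1.0"?>
  <configuration>
    <!-- HDFS: every daemon and client points at the one NameNode -->
    <property>
      <name>fs.default.name</name>
      <value>hdfs://idx2-namenode:9000</value>
    </property>
    <!-- MapReduce: every daemon and client points at the one JobTracker -->
    <property>
      <name>mapred.job.tracker</name>
      <value>idx1-tracker:9001</value>
    </property>
  </configuration>

The start scripts decide which box runs which daemon from where you invoke them
and from conf/masters and conf/slaves, not from hadoop-site.xml, which is why
the same file can be copied everywhere.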


Otis
--
Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch



----- Original Message ----
> From: James Graham (Greywolf) <[EMAIL PROTECTED]>
> To: core-user@hadoop.apache.org
> Sent: Wednesday, August 6, 2008 1:37:20 PM
> Subject: Configuration: I need help.
> 
> Seeing as there is no search function on the archives, I'm relegated
> to asking a possibly redundant question or four:
> 
> I have, as a sample setup:
> 
> idx1-tracker    JobTracker
> idx2-namenode   NameNode
> idx3-slave      DataTracker
> ...
> idx20-slave    DataTracker
> 
> Q1:     Can I put the same hadoop-site.xml file on all machines or do I need
>          to configure each machine separately?
> 
> Q2:     My current setup does not seem to find a primary namenode, but instead
>          wants to put idx1 and idx2 as secondary namenodes; as a result, I am
>          not getting anything usable on any of the web addresses
>          (50030, 50050, 50070, 50090).
> 
> Q3:     Possibly connected to Q1:  The current setup seems to go out and start
>          on all machines (masters/slaves); when I say "bin/start-mapred.sh" on
>          the JobTracker, I get the answer "jobtracker running...kill it first".
> 
> Q4:     Do I even *need* a secondary namenode?
> 
> IWBN if I did not have to maintain three separate configuration files
> (jobtracker/namenode/datatracker).
> -- 
> James Graham (Greywolf)                                  |
> 650.930.1138|925.768.4053                              *
> [EMAIL PROTECTED]                                  |
> Check out what people are saying about SearchMe! -- click below
>     http://www.searchme.com/stack/109aa
