The masters file only contains the secondary namenodes.
When you run start-dfs.sh or start-all.sh, the namenode, which is the master,
is started on the local machine, and a secondary namenode is started on each
host listed in conf/masters.
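
For example, for the five-node layout asked about below, a minimal sketch
(assuming node1 is the machine you run the start scripts from, and that the
hostnames resolve) is to list only node2 in conf/masters:

# conf/masters -- hosts for the *secondary* namenode, not the namenode
node2

# conf/slaves -- hosts for the datanode (and tasktracker) daemons
node3
node4
node5

Run bin/start-dfs.sh on node1: the namenode comes up locally on node1, the
secondary namenode is started over ssh on node2, and datanodes start on
node3 through node5.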

This now-confusing pattern is probably the result of some historical
requirement that we are unaware of.

Here are the relevant lines from bin/start-dfs.sh:

# start dfs daemons
# start namenode after datanodes, to minimize time namenode is up w/o data
# note: datanodes will log connection errors until namenode starts
"$bin"/hadoop-daemon.sh --config $HADOOP_CONF_DIR start namenode
$nameStartOpt
"$bin"/hadoop-daemons.sh --config $HADOOP_CONF_DIR start datanode
$dataStartOpt
"$bin"/hadoop-daemons.sh --config $HADOOP_CONF_DIR --hosts masters start
secondarynamenode
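
Note the last line: hadoop-daemons.sh (plural) runs the command over ssh on
every host in the file named by --hosts, which is why each entry in
conf/masters gets a secondarynamenode. If you would rather not rely on
conf/masters at all, a hedged alternative (assuming the node2 layout above)
is to start the daemon by hand on the machine you want:

# run this on node2 itself to bring up only the secondary namenode there
"$bin"/hadoop-daemon.sh --config $HADOOP_CONF_DIR start secondarynamenode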


On Thu, May 14, 2009 at 11:36 PM, Ninad Raut <hbase.user.ni...@gmail.com> wrote:

> But if we have two masters in the masters file, we have *both* the master
> and secondary-node processes getting started on the two servers listed.
> Can't we have the master and the secondary node started separately on two
> machines?
>
> On Fri, May 15, 2009 at 9:39 AM, jason hadoop <jason.had...@gmail.com> wrote:
>
> > I agree with Billy. conf/masters is misleading as the place for secondary
> > namenodes.
> >
> > On Thu, May 14, 2009 at 8:38 PM, Billy Pearson
> > <sa...@pearsonwholesale.com> wrote:
> >
> > > I think the secondary namenode being set in the masters file in the
> > > conf folder is misleading.
> > >
> > > Billy
> > >
> > >
> > >
> > > "Rakhi Khatwani" <rakhi.khatw...@gmail.com> wrote in message
> > > news:384813770905140603g4d552834gcef2db3028a00...@mail.gmail.com...
> > >
> > >> Hi,
> > >>    I want to set up a cluster of 5 nodes in such a way that
> > >> node1 - master
> > >> node2 - secondary namenode
> > >> node3 - slave
> > >> node4 - slave
> > >> node5 - slave
> > >>
> > >>
> > >> How do we go about that?
> > >> There is no property in hadoop-env.sh where I can set the IP address
> > >> for the secondary namenode.
> > >>
> > >> If I set node1 and node2 in masters, then when we start dfs, the
> > >> namenode and secondary namenode processes are present on both
> > >> machines, but I think only node1 is active,
> > >> and my namenode failover operation fails.
> > >>
> > >> Any suggestions?
> > >>
> > >> Regards,
> > >> Rakhi
> > >>
> > >>
> > >
> > >
> >
> >
> > --
> > Alpha Chapters of my book on Hadoop are available
> > http://www.apress.com/book/view/9781430219422
> > www.prohadoopbook.com a community for Hadoop Professionals
> >
>



-- 
Alpha Chapters of my book on Hadoop are available
http://www.apress.com/book/view/9781430219422
www.prohadoopbook.com a community for Hadoop Professionals
