But if we have two masters in the masters file, the namenode and secondary
namenode processes *both* get started on the two servers listed. Can't we
have the namenode and secondary namenode started separately on two machines?
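
From reading the start scripts, my understanding is that bin/start-dfs.sh
always starts the namenode on the machine it is run from, and only reads
conf/masters to decide where the secondary namenode goes. So something like
the following should split them (node names from Rakhi's example, paths from
a stock 0.19/0.20 tarball; corrections welcome):

    # conf/masters on node1 -- despite the name, this file only lists
    # the hosts that should run the *secondary* namenode
    node2

    # conf/slaves on node1 -- datanode hosts
    node3
    node4
    node5

    # run on node1: starts the namenode locally, datanodes on the hosts
    # in conf/slaves, and the secondary namenode on node2
    bin/start-dfs.sh

    # or, start the secondary namenode by hand on node2 instead
    bin/hadoop-daemon.sh start secondarynamenode

I assume node2 would also need dfs.http.address in its site config pointed
at node1 (e.g. node1:50070) so the secondary namenode can pull the image and
edits over HTTP.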

On Fri, May 15, 2009 at 9:39 AM, jason hadoop <jason.had...@gmail.com> wrote:

> I agree with Billy. conf/masters is misleading as the place for secondary
> namenodes.
>
> On Thu, May 14, 2009 at 8:38 PM, Billy Pearson
> <sa...@pearsonwholesale.com> wrote:
>
> > I think having the secondary namenode set in the masters file in the conf
> > folder is misleading.
> >
> > Billy
> >
> >
> >
> > "Rakhi Khatwani" <rakhi.khatw...@gmail.com> wrote in message
> > news:384813770905140603g4d552834gcef2db3028a00...@mail.gmail.com...
> >
> >> Hi,
> >>    I want to set up a cluster of 5 nodes in such a way that
> >> node1 - master
> >> node2 - secondary namenode
> >> node3 - slave
> >> node4 - slave
> >> node5 - slave
> >>
> >>
> >> How do we go about that?
> >> There is no property in hadoop-env.sh where I can set the IP address for
> >> the secondary namenode.
> >>
> >> If I set node1 and node2 in masters, then when we start DFS, the namenode
> >> and secondary namenode processes are present on both machines, but I think
> >> only node1 is active,
> >> and my namenode failover operation fails.
> >>
> >> Any suggestions?
> >>
> >> Regards,
> >> Rakhi
> >>
> >>
> >
> >
>
>
> --
> Alpha Chapters of my book on Hadoop are available
> http://www.apress.com/book/view/9781430219422
> www.prohadoopbook.com a community for Hadoop Professionals
>
