For large clusters, the namenode has to run on a single node and the
datanodes on the other nodes. So you can start the namenode and jobtracker
on the master node, and the datanode and tasktracker on the slave nodes.
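The split above can be sketched with the CDH3 init scripts. This is only a
sketch: the service names assume the CDH3 hadoop-0.20 packages, so adjust
them if your installation differs.

```shell
# On the master node: start the HDFS and MapReduce masters.
# (Service names assume the CDH3 hadoop-0.20 packages.)
sudo service hadoop-0.20-namenode start
sudo service hadoop-0.20-jobtracker start

# On each slave node: start the workers.
sudo service hadoop-0.20-datanode start
sudo service hadoop-0.20-tasktracker start

# Then verify the status on every node:
sudo service hadoop-0.20-namenode status   # master
sudo service hadoop-0.20-datanode status   # slaves
```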

For more clarity, you can check the service status after starting the daemons.

Verify the ownership and permissions of these directories:
dfs.name.dir hdfs:hadoop drwx------
dfs.data.dir hdfs:hadoop drwx------

mapred.local.dir mapred:hadoop drwxr-xr-x
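A quick way to check those owner/group/mode triples is with stat. A minimal
sketch, assuming GNU stat; /tmp/dfs-name-demo is a hypothetical stand-in --
on a real cluster you would run the same check against the actual
dfs.name.dir, dfs.data.dir, and mapred.local.dir paths from your
configuration files:

```shell
# Print "owner:group mode" for a directory, in the same shape as the
# table above (e.g. "hdfs:hadoop drwx------"). Assumes GNU stat.
check_dir() {
  stat -c '%U:%G %A' "$1"
}

# Hypothetical stand-in for dfs.name.dir; on a real cluster, check the
# actual path and expect "hdfs:hadoop drwx------".
mkdir -p /tmp/dfs-name-demo
chmod 700 /tmp/dfs-name-demo
check_dir /tmp/dfs-name-demo   # mode column should read drwx------
```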

Please follow each step in this link:
https://ccp.cloudera.com/display/CDHDOC/CDH3+Deployment+on+a+Cluster
 On Mar 15, 2012 9:52 PM, "Manish Bhoge" <manishbh...@rocketmail.com> wrote:

> Yes, I understand the order, and I formatted the namenode before starting
> the services. I suspect there may be an ownership or access issue, but I am
> not able to nail down the issue exactly. I also have a question: why are
> there two routes to start the services? When we have the start-all.sh
> script, why do we need to go to init.d to start the services?
>
>
> Thank you,
> Manish
> Sent from my BlackBerry, pls excuse typo
>
> -----Original Message-----
> From: Manu S <manupk...@gmail.com>
> Date: Thu, 15 Mar 2012 21:43:26
> To: <common-user@hadoop.apache.org>; <manishbh...@rocketmail.com>
> Reply-To: common-user@hadoop.apache.org
> Subject: Re: Issue when starting services on CDH3
>
> Did you check the service status?
> Is it like "dead, but pid file exists"?
>
> Did you check the ownership and permissions for dfs.name.dir,
> dfs.data.dir, mapred.local.dir, etc.?
>
> The order for starting the daemons is as follows:
> 1 namenode
> 2 datanode
> 3 jobtracker
> 4 tasktracker
>
> Did you format the namenode before starting?
> On Mar 15, 2012 9:31 PM, "Manu S" <manupk...@gmail.com> wrote:
>
> > Dear Manish,
> > Which daemons are not starting?
> >
> > On Mar 15, 2012 9:21 PM, "Manish Bhoge" <manishbh...@rocketmail.com>
> > wrote:
> > >
> > > I have CDH3 installed in standalone mode. I have installed all the
> > > Hadoop components. Now when I start the services (namenode, secondary
> > > namenode, jobtracker, tasktracker) I can start them gracefully from
> > > /usr/lib/hadoop/bin/start-all.sh. But when I start the same services
> > > from /etc/init.d/hadoop-0.20-* I am unable to start them. Why? Now I
> > > also want to start Hue, which is in init.d; that too I couldn't start.
> > > Here I suspect an authentication issue, because all the services in
> > > init.d are under the root user and root group. Please suggest something;
> > > I am stuck here. I tried Hive and it seems to be running fine.
> > > Thanks
> > > Manish.
> > > Sent from my BlackBerry, pls excuse typo
> > >
> >
>
>
