Sean,

One cause I can think of is that your PID directory is on /tmp (the
default), and the saved PID files got cleared away by tmpwatch, leaving
the daemons running with no record of their PIDs.
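The stop scripts rely on those PID files to know what to kill. Roughly, hadoop-daemon.sh does something like this (a simplified sketch of the mechanism, not the exact source; the file name is illustrative):

```shell
# Rough sketch of how hadoop-daemon.sh decides whether to stop a daemon.
HADOOP_PID_DIR=${HADOOP_PID_DIR:-/tmp}          # the problematic default location
pidfile="$HADOOP_PID_DIR/hadoop-$USER-namenode.pid"

if [ -f "$pidfile" ]; then
  kill "$(cat "$pidfile")"                       # stop the recorded process
else
  echo "no namenode to stop"                     # what you see once the PID file is gone
fi
```

So once tmpwatch deletes the files, the daemons are still alive (jps sees them) but the scripts have nothing to act on.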

To avoid this, export HADOOP_PID_DIR in hadoop-env.sh so it points to a
more persistent location (such as a directory under HADOOP_HOME itself,
say HADOOP_HOME/pids).
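For example, something like this in conf/hadoop-env.sh (the directory shown is just one possible choice; pick any location that survives tmpwatch and is writable by the hadoop user):

```shell
# conf/hadoop-env.sh — keep daemon PID files out of /tmp
export HADOOP_PID_DIR=${HADOOP_HOME}/pids
```

You'd need to create the directory and restart the daemons once for the new PID files to be written there.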

What version of Hadoop are you using though?

On Mon, Apr 30, 2012 at 12:58 AM, Barry, Sean F <sean.f.ba...@intel.com> wrote:
> I just restarted my machines and it works fine now.
>
> -SB
>
> -----Original Message-----
> From: Harsh J [mailto:ha...@cloudera.com]
> Sent: Sunday, April 29, 2012 5:55 AM
> To: common-user@hadoop.apache.org
> Subject: Re: Can't stop hadoop daemons
>
> Hey Barry,
>
> How did you start these daemons in the first place?
>
> On Sun, Apr 29, 2012 at 1:16 AM, Barry, Sean F <sean.f.ba...@intel.com> wrote:
>> hduser@master:~> /usr/java/jdk1.7.0/bin/jps
>>
>> 20907 TaskTracker
>>
>> 20629 SecondaryNameNode
>>
>> 25863 Jps
>>
>> 20777 JobTracker
>>
>> 20383 NameNode
>>
>> 20507 DataNode
>>
>> hduser@master:~> stop-
>>
>> stop-all.sh       stop-balancer.sh  stop-dfs.sh       stop-mapred.sh
>>
>> hduser@master:~> stop-all.sh
>>
>> no jobtracker to stop
>>
>> master: no tasktracker to stop
>>
>> slave: no tasktracker to stop
>>
>> no namenode to stop
>>
>> master: no datanode to stop
>>
>> slave: no datanode to stop
>>
>> master: no secondarynamenode to stop
>>
>> hduser@master:~>
>>
>> As you can see, jps shows that the daemons are running, but I can't stop
>> them with the stop-all.sh command.
>>
>> Does anyone have an idea why this is happening?
>>
>> -SB
>
>
>
> --
> Harsh J



-- 
Harsh J
