This happens when the processes are running but the PID files either don't exist or contain stale PIDs. Check these files on each of your hosts. The PID file location is set by HADOOP_PID_DIR in conf/hadoop-env.sh.
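In a default install HADOOP_PID_DIR points at /tmp, so a tmpwatch run or a reboot can remove the PID files while the daemons keep running, which produces exactly the "no X to stop" output below. As a quick sanity check on each host (a sketch assuming the default /tmp location and the usual hadoop-<user>-<daemon>.pid naming; the /var/hadoop/pids path is just an example, adjust to your setup):

    # conf/hadoop-env.sh: move the PID files somewhere that won't get
    # cleaned up automatically (example path, pick your own):
    export HADOOP_PID_DIR=/var/hadoop/pids

    # On each host, check whether the recorded PID still belongs to a
    # live process (here for the namenode; same pattern for the others):
    cat /tmp/hadoop-$USER-namenode.pid
    ps -p "$(cat /tmp/hadoop-$USER-namenode.pid)"

If the file is missing or the PID is stale, stop-all.sh has nothing to act on; you'll need to stop the daemons by hand once, fix HADOOP_PID_DIR, and restart.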
On Tue, Jun 9, 2009 at 9:28 AM, HRoger <hanxianyongro...@163.com> wrote:
>
> Maybe it's a PID problem!
>
> Piotr Praczyk wrote:
> >
> > Hi
> >
> > I've run into some very peculiar behavior on my Hadoop cluster.
> >
> > When I try to stop it, it claims not to be started:
> >
> > test5.dev ~ # stop-all.sh
> > no jobtracker to stop
> > test6.dev: no tasktracker to stop
> > test5.dev: no tasktracker to stop
> > no namenode to stop
> > test5.dev: no datanode to stop
> > test6.dev: no datanode to stop
> > test5.dev: no secondarynamenode to stop
> >
> > (The same happens for HBase.)
> >
> > At the same time, I am able to submit tasks and access HDFS and HBase.
> > Do you have any idea what could be the reason for this state?
> > I am running the scripts from the same node I started Hadoop from.
> >
> > cheers
> > Piotr
>
> --
> View this message in context:
> http://www.nabble.com/Starting-stopping-hadoop-tp23941355p23946500.html
> Sent from the Hadoop core-user mailing list archive at Nabble.com.