Re: no jobtracker to stop,no namenode to stop

2013-08-30 Thread NJain
Hey Nikhil,

Just tried what you asked and yes, there are files and folders in
c:/Hadoop/name (folders: current, image, previous.checkpoint, in_use.lock).
I also tried with the firewall disabled.

Just want to let you know one more thing: when, on the JobTracker UI, I
click on '0' under the Nodes column (ref: my last post), I get the following
message:
---
localhost Hadoop Machine List - Active Task Trackers
There are currently no known active Task Trackers.
==

Does that mean the task trackers are not starting?

I get the following output from start-all.sh:
---
$ start-all.sh
starting namenode, logging to
/cygdrive/c/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-Nitesh-namenode-XX.out
localhost: starting datanode, logging to
/cygdrive/c/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-Nitesh-datanode-XX.out
localhost: starting secondarynamenode, logging to
/cygdrive/c/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-Nitesh-secondarynamenode-XX.out
starting jobtracker, logging to
/cygdrive/c/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-Nitesh-jobtracker-XX.out
*localhost: starting tasktracker, logging to
/cygdrive/c/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-Nitesh-tasktracker-XX.out
*
==


When I cat
/cygdrive/c/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-Nitesh-tasktracker-XX.out,
I get:
---
ulimit -a for user Nitesh
core file size        (blocks, -c) unlimited
data seg size         (kbytes, -d) unlimited
file size             (blocks, -f) unlimited
open files                    (-n) 256
pipe size          (512 bytes, -p) 8
stack size            (kbytes, -s) 2023
cpu time             (seconds, -t) unlimited
max user processes            (-u) 256
virtual memory        (kbytes, -v) unlimited
==

but when I run stop-all.sh, I get:
---
stopping jobtracker
*localhost: no tasktracker to stop*
stopping namenode
localhost: stopping datanode
localhost: stopping secondarynamenode
==

Do you know how I can verify that the task trackers are starting correctly?
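So far I have only looked with jps; roughly this is what I run (paths are
from my install, so treat them as an example):

```shell
# Rough check that the TaskTracker JVM actually came up (sketch;
# the log path is from my machine - adjust to yours).
jps | grep TaskTracker && echo "TaskTracker JVM is running" \
  || echo "no TaskTracker JVM found"

# The .log file (rather than the .out) usually holds the real error:
tail -n 50 /cygdrive/c/hadoop/hadoop-1.2.1/logs/hadoop-Nitesh-tasktracker-*.log \
  2>/dev/null || echo "log not found yet"
```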

Thanks,
Nitesh


On Fri, Aug 30, 2013 at 2:24 PM, Nikhil2405 [via Hadoop Common] 
ml-node+s472056n4024982...@n3.nabble.com wrote:

 Hi Nitesh,

 I think your localhost IP should be 127.0.0.1 (try this). Also check
 whether, after running ./start-all.sh, anything is being written into the
 *c:/Hadoop/name folder*, and confirm that you disabled your firewall;
 sometimes it can cause problems too.


 Thanks

 Nikhil

 --
  If you reply to this email, your message will be added to the discussion
 below:

 http://hadoop-common.472056.n3.nabble.com/no-jobtracker-to-stop-no-namenode-to-stop-tp34874p4024982.html





--
View this message in context: 
http://hadoop-common.472056.n3.nabble.com/no-jobtracker-to-stop-no-namenode-to-stop-tp34874p4025014.html
Sent from the Users mailing list archive at Nabble.com.

Re: no jobtracker to stop,no namenode to stop

2013-08-29 Thread NJain
Hi Nikhil,

Appreciate your quick response on this, but the issue still persists. I
believe I have covered all the pointers you mentioned; still, I am pasting
the relevant portions of the configuration so that you can verify.

1. /etc/hosts: localhost is not commented out, and the IP address is added.
The entry looks like this:
# localhost name resolution is handled within DNS itself.
127.0.0.1   localhost
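To be sure the line is actually active (not shadowed by a comment), I also
grep for it; a quick sketch:

```shell
# Does an uncommented 127.0.0.1 -> localhost line exist in /etc/hosts?
grep -E '^[[:space:]]*127\.0\.0\.1[[:space:]]+localhost' /etc/hosts \
  && echo "localhost mapping present" \
  || echo "localhost mapping missing"
```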

2. core-site.xml (fs.default.name = hdfs://localhost:<port>):
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>

3. mapred-site.xml (mapred.job.tracker = <host>:<port>):
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>
</configuration>

4. hdfs-site.xml (replication factor of one, plus the dfs.name.dir and
dfs.data.dir properties):
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.name.dir</name>
    <value>c:/Hadoop/name</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>c:/Hadoop/data</value>
  </property>
</configuration>


I am getting stuck at:
13/08/30 11:39:26 WARN mapred.JobClient: No job jar file set.  User classes
may not be found. See JobConf(Class) or JobConf#setJar(String).
13/08/30 11:39:26 INFO input.FileInputFormat: Total input paths to process
: 1
13/08/30 11:39:26 WARN util.NativeCodeLoader: Unable to load native-hadoop
library for your platform... using builtin-java classes where applicable
13/08/30 11:39:26 WARN snappy.LoadSnappy: Snappy native library not loaded
13/08/30 11:39:27 INFO mapred.JobClient: Running job: job_201308301135_0002
13/08/30 11:39:28 INFO mapred.JobClient:  map 0% reduce 0%

My JobTracker UI looks like this:

Cluster Summary (Heap Size is 120.06 MB/888.94 MB)
[flattened table: columns Running Map Tasks, Running Reduce Tasks, Total
Submissions, Nodes, Occupied/Reserved Map and Reduce Slots, Map/Reduce Task
Capacity, Avg. Tasks/Node, Blacklisted/Graylisted/Excluded Nodes; the Nodes
column (http://localhost:50030/machines.jsp?type=active) reads 0]



I have a feeling that the jobtracker is not able to find the task tracker,
as there is a 0 in the Nodes column.

Does this ring any bells?

Thanks,
Nitesh Jain



On Thu, Aug 29, 2013 at 5:51 PM, Nikhil2405 [via Hadoop Common] 
ml-node+s472056n4024848...@n3.nabble.com wrote:

 Hi Nitesh,

 I think your problem may be in your configuration, so check your files as
 follows:

 1. /etc/hosts: localhost should not be commented out, and the IP address
 should be present.
 2. core-site.xml: fs.default.name should be hdfs://localhost:<port>
 3. mapred-site.xml: mapred.job.tracker should be <host>:<port>; also check
 mapred.local.dir
 4. hdfs-site.xml: the replication factor should be one; include the
 dfs.name.dir and dfs.data.dir properties (for details of both, check the
 docs)

 Thanks

 Nikhil

 --
  If you reply to this email, your message will be added to the discussion
 below:

 http://hadoop-common.472056.n3.nabble.com/no-jobtracker-to-stop-no-namenode-to-stop-tp34874p4024848.html





--
View this message in context: 
http://hadoop-common.472056.n3.nabble.com/no-jobtracker-to-stop-no-namenode-to-stop-tp34874p4024979.html
Sent from the Users mailing list archive at Nabble.com.

Re: no jobtracker to stop,no namenode to stop

2013-08-28 Thread NJain
Hi,

I am facing an issue where the map job is stuck at map 0% reduce 0%.

I have installed Hadoop version 1.2.1 and am trying to run it on my Windows 8
machine using Cygwin in pseudo-distributed mode. I have followed the
instructions at http://hadoop.apache.org/docs/stable/single_node_setup.html
and copied the configuration files from there.

When I run stop-all.sh, I observe the output below:
stopping jobtracker
*localhost: no tasktracker to stop*
stopping namenode
localhost: stopping datanode
localhost: stopping secondarynamenode

Can anyone please help or suggest something? I have been stuck on this for a
while now.

Thanks,
Nitesh 
   



--
View this message in context: 
http://hadoop-common.472056.n3.nabble.com/no-jobtracker-to-stop-no-namenode-to-stop-tp34874p4024745.html
Sent from the Users mailing list archive at Nabble.com.


Re: no jobtracker to stop,no namenode to stop

2013-01-21 Thread Harsh J
In the spirit of http://xkcd.com/979/, please also let us know what you felt
the original issue was and how you managed to solve it, for the benefit of
other people searching in the future.


On Mon, Jan 21, 2013 at 3:26 PM, Sigehere peopleman...@gmail.com wrote:

 Hey friends, I have solved that error.
 Thanks




 --
 View this message in context:
 http://hadoop-common.472056.n3.nabble.com/no-jobtracker-to-stop-no-namenode-to-stop-tp34874p4006830.html
 Sent from the Users mailing list archive at Nabble.com.




-- 
Harsh J


Re: no jobtracker to stop,no namenode to stop

2011-02-15 Thread ursbrbalaji

Hi Madhu,

Thanks for the response; sorry, I was busy and couldn't check earlier.

My mapred-site.xml is as follows.

Let me know what changes are needed.

Thanks in advance.

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<!-- In: conf/mapred-site.xml -->
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:54311</value>
    <description>The host and port that the MapReduce job tracker runs
    at.  If "local", then jobs are run in-process as a single map
    and reduce task.
    </description>
  </property>
</configuration>
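For reference, the single-node tutorial that uses 54311 for the JobTracker
normally pairs it with a NameNode on 54310 in core-site.xml; a sketch (the
port here is an assumption from that tutorial - use whatever your NameNode
actually binds to):

```xml
<!-- In: conf/core-site.xml (sketch; port 54310 follows the common
     single-node tutorial convention and is an assumption here) -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:54310</value>
  </property>
</configuration>
```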

- B R Balaji
-- 
View this message in context: 
http://hadoop-common.472056.n3.nabble.com/no-jobtracker-to-stop-no-namenode-to-stop-tp34874p2500297.html
Sent from the Users mailing list archive at Nabble.com.


Re: no jobtracker to stop,no namenode to stop

2011-02-09 Thread ursbrbalaji


Hi Madhu,

The jobtracker logs show the following exception.

2011-02-09 16:24:51,244 INFO org.apache.hadoop.mapred.JobTracker:
STARTUP_MSG: 
/
STARTUP_MSG: Starting JobTracker
STARTUP_MSG:   host = BRBALAJI-PC/172.17.168.45
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 0.20.2
STARTUP_MSG:   build =
https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r
911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
/
2011-02-09 16:24:51,357 INFO org.apache.hadoop.mapred.JobTracker: Scheduler
configured with (memSizeForMapSlotOnJT, memSizeForReduceSlotOnJT,
limitMaxMemForMapTasks, limitMaxMemForReduceTasks) (-1, -1, -1, -1)
2011-02-09 16:24:51,421 INFO org.apache.hadoop.ipc.metrics.RpcMetrics:
Initializing RPC Metrics with hostName=JobTracker, port=54311
2011-02-09 16:24:56,538 INFO org.mortbay.log: Logging to
org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
org.mortbay.log.Slf4jLog
2011-02-09 16:24:56,703 INFO org.apache.hadoop.http.HttpServer: Port
returned by webServer.getConnectors()[0].getLocalPort() before open() is -1.
Opening the listener on 50030
2011-02-09 16:24:56,704 INFO org.apache.hadoop.http.HttpServer:
listener.getLocalPort() returned 50030
webServer.getConnectors()[0].getLocalPort() returned 50030
2011-02-09 16:24:56,704 INFO org.apache.hadoop.http.HttpServer: Jetty bound
to port 50030
2011-02-09 16:24:56,704 INFO org.mortbay.log: jetty-6.1.14
2011-02-09 16:24:57,394 INFO org.mortbay.log: Started
SelectChannelConnector@0.0.0.0:50030
2011-02-09 16:24:57,395 INFO org.apache.hadoop.metrics.jvm.JvmMetrics:
Initializing JVM Metrics with processName=JobTracker, sessionId=
2011-02-09 16:24:57,396 INFO org.apache.hadoop.mapred.JobTracker: JobTracker
up at: 54311
2011-02-09 16:24:57,396 INFO org.apache.hadoop.mapred.JobTracker: JobTracker
webserver: 50030
2011-02-09 16:24:58,710 INFO org.apache.hadoop.ipc.Client: Retrying connect
to server: localhost/127.0.0.1:54310. Already tried 0 time(s).
2011-02-09 16:24:59,711 INFO org.apache.hadoop.ipc.Client: Retrying connect
to server: localhost/127.0.0.1:54310. Already tried 1 time(s).
2011-02-09 16:25:00,712 INFO org.apache.hadoop.ipc.Client: Retrying connect
to server: localhost/127.0.0.1:54310. Already tried 2 time(s).
2011-02-09 16:25:01,713 INFO org.apache.hadoop.ipc.Client: Retrying connect
to server: localhost/127.0.0.1:54310. Already tried 3 time(s).
2011-02-09 16:25:02,713 INFO org.apache.hadoop.ipc.Client: Retrying connect
to server: localhost/127.0.0.1:54310. Already tried 4 time(s).
2011-02-09 16:25:03,714 INFO org.apache.hadoop.ipc.Client: Retrying connect
to server: localhost/127.0.0.1:54310. Already tried 5 time(s).
2011-02-09 16:25:04,715 INFO org.apache.hadoop.ipc.Client: Retrying connect
to server: localhost/127.0.0.1:54310. Already tried 6 time(s).
2011-02-09 16:25:05,715 INFO org.apache.hadoop.ipc.Client: Retrying connect
to server: localhost/127.0.0.1:54310. Already tried 7 time(s).
2011-02-09 16:25:06,716 INFO org.apache.hadoop.ipc.Client: Retrying connect
to server: localhost/127.0.0.1:54310. Already tried 8 time(s).
2011-02-09 16:25:07,717 INFO org.apache.hadoop.ipc.Client: Retrying connect
to server: localhost/127.0.0.1:54310. Already tried 9 time(s).
2011-02-09 16:25:07,722 INFO org.apache.hadoop.mapred.JobTracker: problem
cleaning system directory: null
java.net.ConnectException: Call to localhost/127.0.0.1:54310 failed on
connection exception: java.net.ConnectException: Connection refused
        at org.apache.hadoop.ipc.Client.wrapException(Client.java:767)
        at org.apache.hadoop.ipc.Client.call(Client.java:743)
        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
        at $Proxy4.getProtocolVersion(Unknown Source)
        at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:359)
        at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:106)
        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:207)
        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:170)
        at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:82)
        at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1378)
        at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
        at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1390)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:196)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:95)
        at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:1665)
        at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:183)
        at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:175)
        at org.apache.hadoop.mapred.JobTracker.main(JobTracker.java:3702)
Caused by: java.net.ConnectException: Connection refused
        at 
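If I read this right, nothing is listening on the NameNode port 54310 when
the JobTracker starts. A quick check I can run (sketch; netstat output
format varies by platform):

```shell
# Is anything listening on the NameNode RPC port from the log (54310)?
netstat -an 2>/dev/null | grep -E '[.:]54310 .*LISTEN' \
  && echo "something is listening on 54310" \
  || echo "nothing listening on 54310 - namenode likely not up"
```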

Re: no jobtracker to stop,no namenode to stop

2011-02-09 Thread madhu phatak
An IP address will not work. You have to put the hostnames in every
configuration file.
On Wed, Feb 9, 2011 at 2:01 PM, ursbrbalaji ursbrbal...@gmail.com wrote:



 Hi Madhu,

 The jobtracker logs show the following exception.

 [...]

Re: no jobtracker to stop,no namenode to stop

2011-02-09 Thread madhu phatak
An IP address will not work. You have to put the hostnames in every
configuration file.
On Wed, Feb 9, 2011 at 9:58 PM, madhu phatak phatak@gmail.com wrote:


 An IP address will not work. You have to put the hostnames in every
 configuration file.

 On Wed, Feb 9, 2011 at 2:01 PM, ursbrbalaji ursbrbal...@gmail.com wrote:



 Hi Madhu,

 The jobtracker logs show the following exception.

 [...]

Re: no jobtracker to stop,no namenode to stop

2011-02-08 Thread ursbrbalaji

Hi Prabhu,

I am facing exactly the same problem. I too followed the steps in the below
link.

Please let me know which configuration file was modified and what were the
changes.

Thanks,
Balaji


-- 
View this message in context: 
http://hadoop-common.472056.n3.nabble.com/no-jobtracker-to-stop-no-namenode-to-stop-tp34874p2450308.html
Sent from the Users mailing list archive at Nabble.com.


Re: no jobtracker to stop,no namenode to stop

2011-02-08 Thread madhu phatak
Please see the job tracker logs

On Tue, Feb 8, 2011 at 3:54 PM, ursbrbalaji ursbrbal...@gmail.com wrote:


 Hi Prabhu,

 I am facing exactly the same problem. I too followed the steps in the below
 link.

 Please let me know which configuration file was modified and what were the
 changes.

 Thanks,
 Balaji


 --
 View this message in context:
 http://hadoop-common.472056.n3.nabble.com/no-jobtracker-to-stop-no-namenode-to-stop-tp34874p2450308.html
 Sent from the Users mailing list archive at Nabble.com.



Re: no jobtracker to stop,no namenode to stop

2011-02-08 Thread ursbrbalaji

Hi Prabhu,

I am facing exactly the same problem. I too followed the steps in the below
link.

http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/
 

Please let me know which configuration file was modified and what were the
changes.

Thanks,
Balaji 
-- 
View this message in context: 
http://hadoop-common.472056.n3.nabble.com/no-jobtracker-to-stop-no-namenode-to-stop-tp34874p2450315.html
Sent from the Users mailing list archive at Nabble.com.


Re: no jobtracker to stop,no namenode to stop

2011-02-08 Thread ahmed nagy

I am facing the same problem. Looking in the log files, I find this error:

FATAL org.apache.hadoop.mapred.JobTracker: java.net.BindException:
Problem binding to cannonau.isti.cnr.it/146.48.82.190:9001 : Address already
in use

I also did a netstat to see whether the port is in use, but it does not show
the port as in use; I also changed the port, did another netstat, and the
error is the same.
Any ideas? Please help.
When I stop hadoop, here is what I get: there is no namenode to stop and
there is no job tracker:
ahmednagy@cannonau:~/HadoopStandalone/hadoop-0.21.0/bin$ ./stop-all.sh
This script is Deprecated. Instead use stop-dfs.sh and stop-mapred.sh
no namenode to stop
n01: stopping datanode
n02: stopping datanode
n07: stopping datanode
n06: stopping datanode
n03: stopping datanode
n04: stopping datanode
n08: stopping datanode
n05: stopping datanode
localhost: no secondarynamenode to stop
no jobtracker to stop
n03: stopping tasktracker
n01: stopping tasktracker
n04: stopping tasktracker
n06: stopping tasktracker
n02: stopping tasktracker
n05: stopping tasktracker
n08: stopping tasktracker
n07: stopping tasktracker



I ran jps, and here is the result:
1580 Jps
20972 RunJar
22216 RunJar


2011-02-08 15:25:45,610 INFO org.apache.hadoop.mapred.JobTracker:
STARTUP_MSG:
/
STARTUP_MSG: Starting JobTracker
STARTUP_MSG:   host = cannonau.isti.cnr.it/146.48.82.190
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 0.21.0
STARTUP_MSG:   classpath =
/home/ahmednagy/HadoopStandalone/hadoop-0.21.0/bin/../conf:/usr/lib/jvm/java-6-sun/lib/tools.jar:/home/ahmednagy/HadoopStandalone$
STARTUP_MSG:   build =
https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.21 -r
985326; compiled by 'tomwhite' on Tue Aug 17 01:02:28 EDT 2010
/
2011-02-08 15:25:46,737 INFO org.apache.hadoop.security.Groups: Group
mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping;
cacheTimeout=3000$
2011-02-08 15:25:46,752 INFO org.apache.hadoop.mapred.JobTracker: Starting
jobtracker with owner as ahmednagy and supergroup as supergroup
2011-02-08 15:25:46,755 INFO
org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager:
Updating the current master key for generatin$
2011-02-08 15:25:46,758 INFO
org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager:
Starting expired delegation token remover thr$
2011-02-08 15:25:46,759 INFO
org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager:
Updating the current master key for generatin$
2011-02-08 15:25:46,760 INFO org.apache.hadoop.mapred.JobTracker: Scheduler
configured with (memSizeForMapSlotOnJT, memSizeForReduceSlotOnJT,
limitMaxMemFor$
2011-02-08 15:25:46,762 INFO org.apache.hadoop.util.HostsFileReader:
Refreshing hosts (include/exclude) list
2011-02-08 15:25:46,791 INFO org.apache.hadoop.mapred.QueueManager:
AllQueues : {default=default}; LeafQueues : {default=default}
2011-02-08 15:25:46,873
FATAL org.apache.hadoop.mapred.JobTracker: java.net.BindException:
Problem binding to cannonau.isti.cnr.it/146.48.82.190:9001 : Address already
in use
        at org.apache.hadoop.ipc.Server.bind(Server.java:218)
        at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:289)
        at org.apache.hadoop.ipc.Server.<init>(Server.java:1443)
        at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:343)
        at org.apache.hadoop.ipc.WritableRpcEngine$Server.<init>(WritableRpcEngine.java:324)
        at org.apache.hadoop.ipc.WritableRpcEngine.getServer(WritableRpcEngine.java:284)
        at org.apache.hadoop.ipc.WritableRpcEngine.getServer(WritableRpcEngine.java:45)
        at org.apache.hadoop.ipc.RPC.getServer(RPC.java:331)
        at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:1450)
        at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:258)
        at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:250)
        at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:245)
        at org.apache.hadoop.mapred.JobTracker.main(JobTracker.java:4164)
Caused by: java.net.BindException: Address already in use
        at sun.nio.ch.Net.bind(Native Method)
        at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:119)
        at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
        at org.apache.hadoop.ipc.Server.bind(Server.java:216)
        ... 12 more

2011-02-08 15:25:46,875 INFO org.apache.hadoop.mapred.JobTracker:
SHUTDOWN_MSG:

-- 
View this message in context: 
http://hadoop-common.472056.n3.nabble.com/no-jobtracker-to-stop-no-namenode-to-stop-tp34874p2452336.html
Sent from the Users mailing list archive at Nabble.com.


Re: no jobtracker to stop,no namenode to stop

2011-02-08 Thread rahul patodi
Please check:
1. The required port should be free.
2. Another instance of Hadoop should not be running.
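A sketch of how to check both (Linux netstat flags; the pids in the comment
are the RunJar pids from the jps output quoted below - verify them before
killing anything):

```shell
# 1. Which process, if any, still holds the JobTracker port (9001)?
netstat -tlnp 2>/dev/null | grep ':9001 ' || echo "port 9001 looks free"

# 2. Leftover Hadoop JVMs show up in jps; stop stale ones by pid, e.g.:
# kill 20972 22216    # example pids from the quoted jps output - check first
```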


On Tue, Feb 8, 2011 at 9:58 PM, ahmed nagy ahmed_said_n...@hotmail.com wrote:


 I am facing the same problem I look in the log files I find this that error

 FATAL org.apache.hadoop.mapred.JobTracker: java.net.BindException:
 Problem binding to cannonau.isti.cnr.it/146.48.82.190:9001 : Address
 already
 in use

 I also did a netstat to see whether the port is in use but it does not show
 that the port is in use I also changed the port and did another netstat and
 the error is the same
 any ideas ? please help
 when i stop hadoop here is what i get  there is no namenode to stop and
 there is no job tracker
 ahmednagy@cannonau:~/HadoopStandalone/hadoop-0.21.0/bin$ ./stop-all.sh
 This script is Deprecated. Instead use stop-dfs.sh and stop-mapred.sh
 no namenode to stop
 n01: stopping datanode
 n02: stopping datanode
 n07: stopping datanode
 n06: stopping datanode
 n03: stopping datanode
 n04: stopping datanode
 n08: stopping datanode
 n05: stopping datanode
 localhost: no secondarynamenode to stop
 no jobtracker to stop
 n03: stopping tasktracker
 n01: stopping tasktracker
 n04: stopping tasktracker
 n06: stopping tasktracker
 n02: stopping tasktracker
 n05: stopping tasktracker
 n08: stopping tasktracker
 n07: stopping tasktracker



 I made jps

 and here is the result
 1580 Jps
 20972 RunJar
 22216 RunJar


 2011-02-08 15:25:45,610 INFO org.apache.hadoop.mapred.JobTracker:
 STARTUP_MSG:
 /
 STARTUP_MSG: Starting JobTracker
 STARTUP_MSG:   host = cannonau.isti.cnr.it/146.48.82.190
 STARTUP_MSG:   args = []
 STARTUP_MSG:   version = 0.21.0
 STARTUP_MSG:   classpath =

 /home/ahmednagy/HadoopStandalone/hadoop-0.21.0/bin/../conf:/usr/lib/jvm/java-6-sun/lib/tools.jar:/home/ahmednagy/HadoopStandalone$
 STARTUP_MSG:   build =
 https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.21 -r
 985326; compiled by 'tomwhite' on Tue Aug 17 01:02:28 EDT 2010
 /
 2011-02-08 15:25:46,737 INFO org.apache.hadoop.security.Groups: Group
 mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping;
 cacheTimeout=3000$
 2011-02-08 15:25:46,752 INFO org.apache.hadoop.mapred.JobTracker: Starting
 jobtracker with owner as ahmednagy and supergroup as supergroup
 2011-02-08 15:25:46,755 INFO

 org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager:
 Updating the current master key for generatin$
 2011-02-08 15:25:46,758 INFO

 org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager:
 Starting expired delegation token remover thr$
 2011-02-08 15:25:46,759 INFO

 org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager:
 Updating the current master key for generatin$
 2011-02-08 15:25:46,760 INFO org.apache.hadoop.mapred.JobTracker: Scheduler
 configured with (memSizeForMapSlotOnJT, memSizeForReduceSlotOnJT,
 limitMaxMemFor$
 2011-02-08 15:25:46,762 INFO org.apache.hadoop.util.HostsFileReader:
 Refreshing hosts (include/exclude) list
 2011-02-08 15:25:46,791 INFO org.apache.hadoop.mapred.QueueManager:
 AllQueues : {default=default}; LeafQueues : {default=default}
 2011-02-08 15:25:46,873 FATAL org.apache.hadoop.mapred.JobTracker:
 java.net.BindException: Problem binding to
 cannonau.isti.cnr.it/146.48.82.190:9001 : Address already in use
at org.apache.hadoop.ipc.Server.bind(Server.java:218)
at org.apache.hadoop.ipc.Server$Listener.init(Server.java:289)
at org.apache.hadoop.ipc.Server.init(Server.java:1443)
at org.apache.hadoop.ipc.RPC$Server.init(RPC.java:343)
at

 org.apache.hadoop.ipc.WritableRpcEngine$Server.init(WritableRpcEngine.java:324)
at

 org.apache.hadoop.ipc.WritableRpcEngine.getServer(WritableRpcEngine.java:284)
at

 org.apache.hadoop.ipc.WritableRpcEngine.getServer(WritableRpcEngine.java:45)
at org.apache.hadoop.ipc.RPC.getServer(RPC.java:331)
at org.apache.hadoop.mapred.JobTracker.init(JobTracker.java:1450)
at
 org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:258)
at
 org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:250)
at
 org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:245)
at org.apache.hadoop.mapred.JobTracker.main(JobTracker.java:4164)
 Caused by: java.net.BindException: Address already in use
at sun.nio.ch.Net.bind(Native Method)
at
 sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:119)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
at org.apache.hadoop.ipc.Server.bind(Server.java:216)
... 12 more

 2011-02-08 15:25:46,875 INFO org.apache.hadoop.mapred.JobTracker:
 SHUTDOWN_MSG:


Re: no jobtracker to stop,no namenode to stop

2009-11-17 Thread Prabhu Hari Dhanapal
@ Nitesh, Jeff & Ed

Thanks, guys! It was a mistake in the configuration file. It works now!

8408 Jps
8109 DataNode
8370 TaskTracker
8204 SecondaryNameNode
8281 JobTracker


Except for TaskTracker$Child!




On Mon, Nov 16, 2009 at 10:57 AM, Edward Capriolo edlinuxg...@gmail.com wrote:

 On Mon, Nov 16, 2009 at 9:57 AM, Jeff Zhang zjf...@gmail.com wrote:
  look at the logs of job tracker, maybe you will get some clues.
 
 
  Jeff Zhang
 
 
 
  On Mon, Nov 16, 2009 at 6:45 AM, Prabhu Hari Dhanapal 
  dragonzsn...@gmail.com wrote:
 
  Hi all,
 
  I just installed Hadoop(single node cluster) and tried to start and stop
  the
  nodes , and it said
  no jobtracker to stop , no namenode to stop
 
  However, the tutorial I used suggests that the jobtracker and namenode should
  also have started. Why does this happen?
  Am I missing something?
 
 
 
  http://www.michael-noll.com/wiki/Running_Hadoop_On_Ubuntu_Linux_(Single-Node_Cluster)
 
 
 
  had...@pdhanapa-laptop:/home/pdhanapa/Desktop/hadoop/bin$ jps
  20671 Jps
  20368 DataNode
  20463 SecondaryNameNode
 
  had...@pdhanapa-laptop:/home/pdhanapa/Desktop/hadoop/bin$ ./stop-all.sh
  no jobtracker to stop
  localhost: no tasktracker to stop
  no namenode to stop
  localhost: stopping datanode
  localhost: stopping secondarynamenode
 
 
 
 
  --
  Hari
 
 


 The issue here is that these daemons failed to start. What happens is
 that as soon as the java process is started, the system returns an ok
 status to the script; however, the processes die moments later, as they
 start up.

 For example, if you start the namenode, the script returns ok, but the
 namenode runs, realizes its dfs.name directory is not formatted, and
 then stops.

 Generally, after starting a hadoop process, tail the log it creates for
 a few seconds and make sure it REALLY starts up. Ideally the scripts
 would do more pre-startup checking, but they cannot test for every
 possible condition that could cause hadoop not to start.

 Also, for long-running daemons the pid files are written to /tmp (see
 bin/hadoop-daemon.sh). If something is cleaning /tmp, the stop commands
 are unable to find the pid.

 That is shell scripting for you :)
 Edward




-- 
Hari


Re: no jobtracker to stop,no namenode to stop

2009-11-16 Thread Nitesh Bhatia
Check if JAVA_HOME is pointing to /usr/lib/jvm/java-6-sun
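One way to run that check from the shell (a sketch; /usr/lib/jvm/java-6-sun is just the path from this thread, and your JDK location may differ):

```shell
# Verify that JAVA_HOME (as conf/hadoop-env.sh would use it) points at a
# real JDK: the directory must exist and contain an executable bin/java.
check_java_home() {
  if [ -n "$1" ] && [ -x "$1/bin/java" ]; then
    echo "JAVA_HOME looks valid: $1"
  else
    echo "JAVA_HOME is unset or has no bin/java: '$1'"
  fi
}

check_java_home "$JAVA_HOME"
check_java_home /usr/lib/jvm/java-6-sun   # path from this thread
```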

On Mon, Nov 16, 2009 at 8:15 PM, Prabhu Hari Dhanapal 
dragonzsn...@gmail.com wrote:

 Hi all,

 I just installed Hadoop(single node cluster) and tried to start and stop
 the
 nodes , and it said
 no jobtracker to stop , no namenode to stop

 However, the tutorial I used suggests that the jobtracker and namenode should
 also have started. Why does this happen?
 Am I missing something?


 http://www.michael-noll.com/wiki/Running_Hadoop_On_Ubuntu_Linux_(Single-Node_Cluster)


 had...@pdhanapa-laptop:/home/pdhanapa/Desktop/hadoop/bin$ jps
 20671 Jps
 20368 DataNode
 20463 SecondaryNameNode

 had...@pdhanapa-laptop:/home/pdhanapa/Desktop/hadoop/bin$ ./stop-all.sh
 no jobtracker to stop
 localhost: no tasktracker to stop
 no namenode to stop
 localhost: stopping datanode
 localhost: stopping secondarynamenode




 --
 Hari




-- 
Nitesh Bhatia

Life is never perfect. It just depends where you draw the line.

http://www.linkedin.com/in/niteshbhatia
http://www.twitter.com/niteshbhatia


Re: no jobtracker to stop,no namenode to stop

2009-11-16 Thread Jeff Zhang
look at the logs of job tracker, maybe you will get some clues.


Jeff Zhang



On Mon, Nov 16, 2009 at 6:45 AM, Prabhu Hari Dhanapal 
dragonzsn...@gmail.com wrote:

 Hi all,

 I just installed Hadoop(single node cluster) and tried to start and stop
 the
 nodes , and it said
 no jobtracker to stop , no namenode to stop

 However, the tutorial I used suggests that the jobtracker and namenode should
 also have started. Why does this happen?
 Am I missing something?


 http://www.michael-noll.com/wiki/Running_Hadoop_On_Ubuntu_Linux_(Single-Node_Cluster)


 had...@pdhanapa-laptop:/home/pdhanapa/Desktop/hadoop/bin$ jps
 20671 Jps
 20368 DataNode
 20463 SecondaryNameNode

 had...@pdhanapa-laptop:/home/pdhanapa/Desktop/hadoop/bin$ ./stop-all.sh
 no jobtracker to stop
 localhost: no tasktracker to stop
 no namenode to stop
 localhost: stopping datanode
 localhost: stopping secondarynamenode




 --
 Hari



Re: no jobtracker to stop,no namenode to stop

2009-11-16 Thread Edward Capriolo
On Mon, Nov 16, 2009 at 9:57 AM, Jeff Zhang zjf...@gmail.com wrote:
 look at the logs of job tracker, maybe you will get some clues.


 Jeff Zhang



 On Mon, Nov 16, 2009 at 6:45 AM, Prabhu Hari Dhanapal 
 dragonzsn...@gmail.com wrote:

 Hi all,

 I just installed Hadoop(single node cluster) and tried to start and stop
 the
 nodes , and it said
 no jobtracker to stop , no namenode to stop

 However, the tutorial I used suggests that the jobtracker and namenode should
 also have started. Why does this happen?
 Am I missing something?


 http://www.michael-noll.com/wiki/Running_Hadoop_On_Ubuntu_Linux_(Single-Node_Cluster)


 had...@pdhanapa-laptop:/home/pdhanapa/Desktop/hadoop/bin$ jps
 20671 Jps
 20368 DataNode
 20463 SecondaryNameNode

 had...@pdhanapa-laptop:/home/pdhanapa/Desktop/hadoop/bin$ ./stop-all.sh
 no jobtracker to stop
 localhost: no tasktracker to stop
 no namenode to stop
 localhost: stopping datanode
 localhost: stopping secondarynamenode




 --
 Hari




The issue here is that these daemons failed to start. What happens is
that as soon as the java process is started, the system returns an ok
status to the script; however, the processes die moments later, as they
start up.

For example, if you start the namenode, the script returns ok, but the
namenode runs, realizes its dfs.name directory is not formatted, and
then stops.

Generally, after starting a hadoop process, tail the log it creates for
a few seconds and make sure it REALLY starts up. Ideally the scripts
would do more pre-startup checking, but they cannot test for every
possible condition that could cause hadoop not to start.
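That "make sure it REALLY starts" step can be scripted. The sketch below uses a stand-in `sleep` process instead of a real daemon; with Hadoop you would launch via bin/hadoop-daemon.sh and tail the matching file under logs/ instead:

```shell
# Start a background process, then confirm a few seconds later that it
# is still alive -- the same check you want after start-all.sh, since
# the start script reports ok as soon as the JVM forks.
sleep 30 &                 # stand-in for: bin/hadoop-daemon.sh start namenode
pid=$!
sleep 2                    # give it time to die during startup, if it will
if kill -0 "$pid" 2>/dev/null; then
  echo "process $pid is still running"
else
  echo "process $pid died during startup; check its log"
fi
kill "$pid" 2>/dev/null    # clean up the stand-in
```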

Also, for long-running daemons the pid files are written to /tmp (see
bin/hadoop-daemon.sh). If something is cleaning /tmp, the stop commands
are unable to find the pid.
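A toy sketch of that stop logic (the file name below is made up for the demo; the real script derives it from $HADOOP_PID_DIR, which you can point somewhere safer than /tmp in conf/hadoop-env.sh):

```shell
# hadoop-daemon.sh stops a daemon by reading its pid from a pid file
# (under /tmp by default). If tmp-cleaning removed the file, stop
# reports "no <daemon> to stop" even though the JVM is still running.
pid_file=/tmp/hadoop-demo-namenode.pid   # demo path, not the real file name

stop_daemon() {
  if [ -f "$pid_file" ]; then
    echo "stopping namenode (pid $(cat "$pid_file"))"
  else
    echo "no namenode to stop"
  fi
}

rm -f "$pid_file"
stop_daemon                 # pid file gone: "no namenode to stop"
echo 12345 > "$pid_file"
stop_daemon                 # pid file present: "stopping namenode (pid 12345)"
rm -f "$pid_file"
```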

That is shell scripting for you :)
Edward