Can you paste the logs that mention "jobtracker" from $HADOOP_HOME/logs/ into pastebin.com and pass back the link? It looks like your MR services aren't starting for some reason, and the logs will tell you why.
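For reference, a small POSIX-shell sketch of how those logs could be gathered into one file for pasting. This is not from the thread: the log-file naming assumes the Hadoop 1.x default (`hadoop-<user>-<daemon>-<host>.log`), and the 100-line tail and output path are arbitrary choices.

```shell
#!/bin/sh
# Sketch: append the tail of each MapReduce daemon log to one file for pasting.
# Log naming assumes the Hadoop 1.x default: hadoop-<user>-jobtracker-<host>.log.
collect_mr_logs() {
    dir="$1"   # log directory, e.g. "$HADOOP_HOME/logs"
    out="$2"   # file to collect everything into
    for f in "$dir"/hadoop-*-jobtracker-*.log "$dir"/hadoop-*-tasktracker-*.log; do
        [ -e "$f" ] || continue           # skip globs that matched nothing
        printf '==== %s ====\n' "$f" >> "$out"
        tail -n 100 "$f" >> "$out"        # last 100 lines usually cover the failure
    done
}

# Usage on a live install (path is an assumption; adjust to your setup):
# collect_mr_logs "$HADOOP_HOME/logs" /tmp/mr-logs.txt
```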
On Wed, May 16, 2012 at 7:08 PM, waqas latif <waqas...@gmail.com> wrote:
> Hi Harsh,
> I ran this, but there is still no JobTracker or TaskTracker in the jps
> output. I only have DataNode, NameNode, and SecondaryNameNode in the jps
> output.
>
> waqas
>
> On Wed, May 16, 2012 at 3:28 PM, Harsh J <ha...@cloudera.com> wrote:
>
>> You have configured an MR cluster, but it isn't up.
>>
>> Run:
>>
>> bin/start-mapred.sh
>>
>> Then check for "JobTracker" and "TaskTracker" in 'jps' output.
>>
>> Then re-run your example pi job, and it should go through.
>>
>> On Wed, May 16, 2012 at 6:51 PM, waqas latif <waqas...@gmail.com> wrote:
>> > Hi, I am trying to configure Hadoop 1.0 in pseudo-distributed mode.
>> >
>> > But when I run the pi example included in the Hadoop distribution, I get
>> > the error mentioned in the title. Can someone please help me and guide me
>> > on how to fix this problem? If possible, please suggest a solution as well
>> > as pinpointing the problem.
>> >
>> > Here is what I get by running jps:
>> >
>> > 8322 Jps
>> > 7611 SecondaryNameNode
>> > 7474 DataNode
>> > 7341 NameNode
>> >
>> > Here is the complete error message:
>> >
>> > Number of Maps  = 10
>> > Samples per Map = 100
>> > Wrote input for Map #0
>> > Wrote input for Map #1
>> > Wrote input for Map #2
>> > Wrote input for Map #3
>> > Wrote input for Map #4
>> > Wrote input for Map #5
>> > Wrote input for Map #6
>> > Wrote input for Map #7
>> > Wrote input for Map #8
>> > Wrote input for Map #9
>> > Starting Job
>> > 12/05/16 13:11:56 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8021. Already tried 0 time(s).
>> > 12/05/16 13:11:57 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8021. Already tried 1 time(s).
>> > 12/05/16 13:11:58 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8021. Already tried 2 time(s).
>> > 12/05/16 13:11:59 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8021. Already tried 3 time(s).
>> > 12/05/16 13:12:00 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8021. Already tried 4 time(s).
>> > 12/05/16 13:12:01 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8021. Already tried 5 time(s).
>> > 12/05/16 13:12:02 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8021. Already tried 6 time(s).
>> > 12/05/16 13:12:03 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8021. Already tried 7 time(s).
>> > 12/05/16 13:12:04 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8021. Already tried 8 time(s).
>> > 12/05/16 13:12:05 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8021. Already tried 9 time(s).
>> > java.net.ConnectException: Call to localhost/127.0.0.1:8021 failed on
>> > connection exception: java.net.ConnectException: Connection refused
>> >     at org.apache.hadoop.ipc.Client.wrapException(Client.java:1095)
>> >     at org.apache.hadoop.ipc.Client.call(Client.java:1071)
>> >     at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
>> >     at org.apache.hadoop.mapred.$Proxy2.getProtocolVersion(Unknown Source)
>> >     at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:396)
>> >     at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:379)
>> >     at org.apache.hadoop.mapred.JobClient.createRPCProxy(JobClient.java:480)
>> >     at org.apache.hadoop.mapred.JobClient.init(JobClient.java:474)
>> >     at org.apache.hadoop.mapred.JobClient.<init>(JobClient.java:457)
>> >     at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1260)
>> >     at org.apache.hadoop.examples.PiEstimator.estimate(PiEstimator.java:297)
>> >     at org.apache.hadoop.examples.PiEstimator.run(PiEstimator.java:342)
>> >     at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
>> >     at org.apache.hadoop.examples.PiEstimator.main(PiEstimator.java:351)
>> >     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> >     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>> >     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>> >     at java.lang.reflect.Method.invoke(Method.java:601)
>> >     at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
>> >     at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
>> >     at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:64)
>> >     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> >     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>> >     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>> >     at java.lang.reflect.Method.invoke(Method.java:601)
>> >     at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
>> > Caused by: java.net.ConnectException: Connection refused
>> >     at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
>> >     at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:701)
>> >     at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
>> >     at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:656)
>> >     at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:434)
>> >     at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:560)
>> >     at org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:184)
>> >     at org.apache.hadoop.ipc.Client.getConnection(Client.java:1202)
>> >     at org.apache.hadoop.ipc.Client.call(Client.java:1046)
>> >     ... 24 more
>>
>> --
>> Harsh J

--
Harsh J
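The sequence Harsh suggests (start the MR daemons, confirm them in `jps`, then resubmit) can be sketched as a small shell helper. This is an illustration, not from the thread: `mr_daemons_up` is a hypothetical helper name, and the commented usage assumes the Hadoop 1.x layout with scripts under `$HADOOP_HOME/bin` and the examples jar at the distribution root.

```shell
#!/bin/sh
# Sketch: only resubmit the pi job once both MapReduce daemons are visible.

mr_daemons_up() {
    # True only if both JobTracker and TaskTracker appear in the jps listing.
    jps_out="$1"
    printf '%s\n' "$jps_out" | grep -q 'JobTracker' &&
        printf '%s\n' "$jps_out" | grep -q 'TaskTracker'
}

# Usage on a live Hadoop 1.x install (paths are assumptions):
# "$HADOOP_HOME"/bin/start-mapred.sh
# if mr_daemons_up "$(jps)"; then
#     "$HADOOP_HOME"/bin/hadoop jar "$HADOOP_HOME"/hadoop-examples-*.jar pi 10 100
# else
#     echo "JobTracker/TaskTracker still missing; check $HADOOP_HOME/logs" >&2
# fi
```

If the daemons still fail to appear, the JobTracker log under `$HADOOP_HOME/logs` is the place to look, as in the reply at the top of the thread.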