Hi,
I'd forgotten to run start-mapred.sh. It works now.
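For the archives, the fix was roughly the following (paths assume the stock Hadoop tarball layout, run from the installation directory):

```shell
# Start the MapReduce daemons: the JobTracker on this node,
# TaskTrackers on the nodes listed in conf/slaves
bin/start-mapred.sh

# The JobTracker should now appear in the jps listing
jps | grep JobTracker
```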

Thanks All

-----Original Message-----
From: Cavus,M.,Fa. Post Direkt [mailto:m.ca...@postdirekt.de] 
Sent: Friday, December 31, 2010 10:20 AM
To: common-user@hadoop.apache.org
Subject: RE: Retrying connect to server

Hi,
I do get this:
$ jps
6017 DataNode
5805 NameNode
6234 SecondaryNameNode
6354 Jps

What can I do to start the JobTracker?

Here are my config files:
$ cat mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
    <description>The host and port that the MapReduce job tracker runs
    at.</description>
  </property>
</configuration>


$ cat hdfs-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
    <description>The actual number of replications can be specified when the
    file is created.</description>
  </property>
</configuration>

$ cat core-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
    <description>The name of the default file system. A URI whose
    scheme and authority determine the FileSystem implementation.
    </description>
  </property>
</configuration>

-----Original Message-----
From: James Seigel [mailto:ja...@tynt.com] 
Sent: Friday, December 31, 2010 4:56 AM
To: common-user@hadoop.apache.org
Subject: Re: Retrying connect to server

Or...
        3) The configuration (or lack thereof) on the machine you are trying to 
run this on has no idea where your DFS or JobTracker is :)

Cheers
James.

On 2010-12-30, at 8:53 PM, Adarsh Sharma wrote:

> Cavus,M.,Fa. Post Direkt wrote:
>> I process this
>> 
>> ./hadoop jar ../../hadoopjar/hd.jar org.postdirekt.hadoop.WordCount 
>> gutenberg gutenberg-output
>> 
>> I get this:
>> Does anyone know why I get this error?
>> 
>> 10/12/30 16:48:59 INFO security.Groups: Group mapping 
>> impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping; 
>> cacheTimeout=300000
>> 10/12/30 16:49:01 INFO ipc.Client: Retrying connect to server: 
>> localhost/127.0.0.1:9001. Already tried 0 time(s).
>> 10/12/30 16:49:02 INFO ipc.Client: Retrying connect to server: 
>> localhost/127.0.0.1:9001. Already tried 1 time(s).
>> 10/12/30 16:49:03 INFO ipc.Client: Retrying connect to server: 
>> localhost/127.0.0.1:9001. Already tried 2 time(s).
>> 10/12/30 16:49:04 INFO ipc.Client: Retrying connect to server: 
>> localhost/127.0.0.1:9001. Already tried 3 time(s).
>> 10/12/30 16:49:05 INFO ipc.Client: Retrying connect to server: 
>> localhost/127.0.0.1:9001. Already tried 4 time(s).
>> 10/12/30 16:49:06 INFO ipc.Client: Retrying connect to server: 
>> localhost/127.0.0.1:9001. Already tried 5 time(s).
>> 10/12/30 16:49:07 INFO ipc.Client: Retrying connect to server: 
>> localhost/127.0.0.1:9001. Already tried 6 time(s).
>> 10/12/30 16:49:08 INFO ipc.Client: Retrying connect to server: 
>> localhost/127.0.0.1:9001. Already tried 7 time(s).
>> 10/12/30 16:49:09 INFO ipc.Client: Retrying connect to server: 
>> localhost/127.0.0.1:9001. Already tried 8 time(s).
>> 10/12/30 16:49:10 INFO ipc.Client: Retrying connect to server: 
>> localhost/127.0.0.1:9001. Already tried 9 time(s).
>> Exception in thread "main" java.net.ConnectException: Call to 
>> localhost/127.0.0.1:9001 failed on connection exception: 
>> java.net.ConnectException: Connection refused
>>      at org.apache.hadoop.ipc.Client.wrapException(Client.java:932)
>>      at org.apache.hadoop.ipc.Client.call(Client.java:908)
>>      at 
>> org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:198)
>>      at $Proxy0.getProtocolVersion(Unknown Source)
>>      at 
>> org.apache.hadoop.ipc.WritableRpcEngine.getProxy(WritableRpcEngine.java:228)
>>      at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:224)
>>      at org.apache.hadoop.mapreduce.Cluster.createRPCProxy(Cluster.java:82)
>>      at org.apache.hadoop.mapreduce.Cluster.createClient(Cluster.java:94)
>>      at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:70)
>>      at org.apache.hadoop.mapreduce.Job.<init>(Job.java:129)
>>      at org.apache.hadoop.mapreduce.Job.<init>(Job.java:134)
>>      at org.postdirekt.hadoop.WordCount.main(WordCount.java:19)
>>      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>      at 
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>      at 
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>      at java.lang.reflect.Method.invoke(Method.java:597)
>>      at org.apache.hadoop.util.RunJar.main(RunJar.java:192)
>> Caused by: java.net.ConnectException: Connection refused
>>      at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
>>      at 
>> sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:574)
>>      at 
>> org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
>>      at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:373)
>>      at 
>> org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:417)
>>      at org.apache.hadoop.ipc.Client$Connection.access$1900(Client.java:207)
>>      at org.apache.hadoop.ipc.Client.getConnection(Client.java:1025)
>>      at org.apache.hadoop.ipc.Client.call(Client.java:885)
>>      ... 15 more
>>  
> This is the most common issue that occurs after configuring a Hadoop cluster.
> 
> Reasons:
> 
> 1. Your NameNode or JobTracker is not running. Verify through the web UI and 
> the jps command.
> 2. DNS resolution. You must have IP/hostname entries for all nodes in the 
> /etc/hosts file.
> 
> 
> 
> Best Regards
> 
> Adarsh Sharma
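
Point 1 above can also be checked without the web UI. A minimal sketch of a TCP probe (bash-only, using the /dev/tcp redirection; `probe_port` is a hypothetical helper, not a Hadoop tool):

```shell
# probe_port HOST PORT - succeed if something accepts TCP connections there.
# Relies on bash's /dev/tcp pseudo-device; will not work in plain sh.
probe_port() {
  (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null
}

# "Connection refused" in the log means nothing is listening on 9001
if probe_port localhost 9001; then
  echo "port 9001 is open"
else
  echo "port 9001 refused - JobTracker is not listening"
fi
```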
